PostgreSQL tuning best practices for data warehousing - postgresql

I have found plenty of online and print guides on how to tune and optimize Postgres performance for OLTP applications, but I haven't found anything of the sort specific to data warehousing applications. Since the workloads differ in so many ways, I'm sure there have to be differences in how the databases are managed and tuned.
Some of my own:
On the DDL side, I have found that I use indexes much more liberally, since I usually only have to worry about inserts once a day and can do batch inserts with index rebuilds.
I typically add integer surrogate keys to data that has more than one natural key, for faster joins.
I usually define and maintain a very comprehensive date table with prebuilt date manipulations (fiscal date as opposed to calendar date, fiscal year-month, starting day of the week, etc.) and use it liberally instead of calling date functions in SELECT and WHERE clauses (a sketch of such a table is below). This usually helps with CPU-bound aggregate queries.
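To make that concrete, here is a minimal sketch of such a date dimension; the column names and the fiscal-year rule (a fiscal year starting July 1st) are illustrative assumptions, not a standard:

    -- A minimal date dimension; names and fiscal rules are illustrative only.
    CREATE TABLE dim_date (
        date_actual       date PRIMARY KEY,
        fiscal_year       integer NOT NULL,
        fiscal_year_month text    NOT NULL,   -- e.g. '2014-M01'
        week_start_date   date    NOT NULL,
        day_of_week       integer NOT NULL
    );

    -- Populate it once; the 6-month shift models a July 1st fiscal year start.
    INSERT INTO dim_date
    SELECT d::date,
           EXTRACT(year FROM d + interval '6 months')::int,
           to_char(d + interval '6 months', 'YYYY-"M"MM'),
           date_trunc('week', d)::date,
           EXTRACT(isodow FROM d)::int
    FROM generate_series('2000-01-01'::date, '2030-12-31', '1 day') AS d;

Fact queries then join to this table and filter on the precomputed columns instead of evaluating date functions per row.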
I was hoping that I would find some information on memory management and other database settings, but I would be happy to hear any useful best practices specific to Postgres-based Data Warehousing.

My experience (admittedly on a pretty small scale when it comes to data warehouses):
Like you mention, pre-aggregating data is easily the most important thing, as it reduces the amount of data that needs to be read by many orders of magnitude.
Avoid short write transactions, subtransactions and savepoints. This includes exception handling in PL/pgSQL. These burn through the available "transaction ID" space quickly and cause expensive anti-wraparound vacuums that have to scan (and freeze) entire tables.
I found that partitioning tables so that each partition individually fits in the kernel's cache is good for maintenance and migrations, if you ever need to do any. This means you can recreate all indexes on a partition with just one sequential scan from disk, instead of one scan per index (see the sketch after this list).
Like Chris already mentioned, be generous with work_mem and maintenance_work_mem; even if your workload doesn't fit in RAM, keeping more intermediate data in memory saves I/O and CPU time thanks to smarter query plans (most importantly HashAggregate).
If you need to do huge sorts, it can help to buy a dedicated SSD for storing the temporary files.
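A rough sketch of the partitioning and memory settings mentioned above, using PostgreSQL 10+ declarative partitioning; the table and the values are hypothetical, so treat them as starting points rather than recommendations:

    -- Partition a large fact table by month so each partition stays small
    -- enough to be cached and reindexed with a single sequential scan.
    CREATE TABLE fact_sales (
        sale_date   date    NOT NULL,
        customer_id integer NOT NULL,
        amount      numeric NOT NULL
    ) PARTITION BY RANGE (sale_date);

    CREATE TABLE fact_sales_2013_01 PARTITION OF fact_sales
        FOR VALUES FROM ('2013-01-01') TO ('2013-02-01');
    CREATE INDEX ON fact_sales_2013_01 (customer_id);

    -- Rebuilding indexes one partition at a time only touches that partition:
    REINDEX TABLE fact_sales_2013_01;

    -- Per-session memory for big sorts/hashes and index builds, sized for a
    -- dedicated reporting box rather than a shared OLTP server:
    SET work_mem = '256MB';
    SET maintenance_work_mem = '2GB';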

From a memory-management perspective, one of the biggest differences is that with OLTP you can often hope to keep the working set in memory, while with OLAP environments you usually cannot. Additionally, the sets you join are very often much larger. This means higher work_mem settings can be very helpful, and to the extent your tables are denormalized you can push work_mem a bit higher than you otherwise might. I am not sure my advice on shared_buffers would change (I prefer to start low and increase, testing performance at each step), but work_mem certainly needs to increase if you are doing reporting on sets of any size.
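One way to apply this without raising work_mem for every connection is to scope it to the reporting workload; a minimal sketch (the role name, table and values are hypothetical):

    -- Give only the reporting role a bigger per-operation memory budget;
    -- other connections keep the conservative default.
    ALTER ROLE reporting_user SET work_mem = '512MB';

    -- Or raise it just for one heavy session or query:
    SET work_mem = '1GB';
    SELECT customer_id, sum(amount)
    FROM fact_sales
    GROUP BY customer_id;   -- now more likely to get a HashAggregate plan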

Related

Table scan vs GSI when not scanning a lot

Is there any metric for when to choose a full table scan over GSI or the other way around?
I know the basic concept behind both, but the pricing model for a GSI is so dependent on the table itself that I'm having a hard time deciding,
and more importantly, how that would scale with different table sizes, or how much scanning is too inefficient and calls for a GSI instead.
By the by, I'm having a hard time finding good resources on filter expressions for Query and Scan on DynamoDB; any good recommendations? ("#v >= :num" is what I mean; I'm probably not searching with the correct term.)
In general the decision to use a query versus a scan comes down to how much of the data you need. If the answer is that you need most of the data (which in practice can only really be the case for relatively small tables), then use a scan. Otherwise, use a query, pretty much every time.
It’s impossible to give a hard threshold for what ‘most of the data’ means. I’d say definitely more than 50% and that this threshold tends to 100% as the table size grows.
The exception to the above would be one-off operations that can be performed in the background, where you're willing to trade time for cost. The corollary is that if you are fetching data for a customer-facing request, your aim should be to read as little from the database as possible to keep request times short.
All this being said, parallel scans can be super quick to pull in a lot of data, if you really need it and you are in a position to consume extra capacity. Even on largish tables, as long as you have the capacity to spare you can pull in hundreds of thousands of items in just a few seconds.

RDBMS denormalization of tables for key-value pair information storage

I have four tables:
ProductAttribute - stores product attributes (Color, Size, etc.)
ProductAttributeValue - stores product attribute values (Green, 10, etc.)
MapProductAttributeValue - stores the relation between a product attribute and its values (Color-Green, Color-Blue)
MapProductAndAttributeValue - stores the relation between the Product table and the MapProductAttributeValue table
How can I denormalize this schema in MySQL? I do not want to go for NoSQL.
I want to use an RDBMS approach only; or can I use some different storage mechanism?
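For reference, a lookup of one product's attributes against this layout ends up as a chain of joins across all four tables; the column names below are guesses, since I'm not showing the DDL:

    -- Hypothetical column names; one product lookup joins all four tables.
    SELECT pa.name  AS attribute,
           pav.value
    FROM MapProductAndAttributeValue mpav
    JOIN MapProductAttributeValue    mav ON mav.id = mpav.map_attribute_value_id
    JOIN ProductAttribute            pa  ON pa.id  = mav.product_attribute_id
    JOIN ProductAttributeValue       pav ON pav.id = mav.product_attribute_value_id
    WHERE mpav.product_id = 42;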
Well, you don't explain your problem in detail, but as it is most likely performance related, there are some options short of going to NoSQL, or rather NewSQL.
It seems like you have some kind of product database "with flavors", and modern systems should be able to handle huge product trees with a normalized DB, provided that the application is a good citizen.
Before going on to denormalize the database, let me mention some things you may have done already; since your question does not have those details in place, I add them just in case:
Your problem is most likely an I/O bottleneck. Can you give any details on how the I/O is distributed over different spindles? Since the performance impact of an I/O bottleneck (or a memory bottleneck, which is even worse) as the primary bottleneck is exponential rather than linear, off-loading and/or distributing the I/O is the first thing to check.
Have you analyzed the I/O profile in detail? Are you using "plain" MySQL or InnoDB?
Optimize RAM use to cache the database. What is the size of the DB? A few gigabytes of memory is cheap and buys you time to truly understand the challenge. Remember: locality is a challenge; if your demand exceeds the resource (I/O or RAM) by 10%, you might get a performance penalty of 90%. By solving the primary and secondary bottlenecks AND setting the right parameters for your RDBMS, you may see a huge difference (a quick check is sketched after this list).
If none of the above works, consider using a SQL-compliant IMDB (in-memory database); most of them have quite sophisticated algorithms to optimize the "hot spots" of the data.
Only if you have done/considered all of the above would I go on to denormalizing the database, unless it is brain-dead from the beginning ;-)
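The quick check mentioned above, as a sketch; the buffer pool value is only an example and assumes InnoDB (on MySQL versions before 5.7 the setting has to go in my.cnf instead of SET GLOBAL):

    -- Rough per-schema data + index size, to compare against available RAM:
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024) AS size_mb
    FROM information_schema.tables
    GROUP BY table_schema;

    -- If the hot data can fit in memory, let InnoDB actually cache it
    -- (example value only; leave headroom for the OS and connections):
    SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;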
So in short: monitor the system, collect evidence about where the bottleneck is, and then solve the true problem, whatever it might be.
cheers, //Jari

PostgreSQL temporary table cache in memory?

Context:
I want to store some temporary results in temporary tables. These tables may be reused by several queries that occur close together in time, but at some point the evolutionary algorithm I'm using may no longer need some of the old tables and will keep generating new ones. There will be several queries, possibly concurrent, using those tables, with only one user doing all of them. I don't know if that clarifies everything about sessions and so on; I'm still uncertain about how that works.
Objective:
What I would like to do is create the temporary tables (if they don't already exist), keep them in memory as far as possible, and if at some point there is not enough memory, drop the ones that would otherwise be written out to the HDD (I guess those would be the least recently used).
Examples:
The client will be running queries for EMAs with different parameters, plus an aggregation of them with different coefficients. Each individual may vary in the coefficients it uses, so the same EMA parameters may repeat while they are still in the gene pool and may not be needed after a while. There will be similar queries with more parameters, and the genetic algorithm will find the right values for those parameters.
Questions:
Is that what "on commit drop" means? I've seen descriptions about
sessions and transactions but I don't really understand those
concepts. Sorry if the question is stupid.
If it is not, do you know about any simple way to get Postgres to do
this?
Workaround:
In the worst case I should be able to make a guesstimate of how many tables I can keep in memory and try to implement the LRU myself, but it's never going to be as good as what Postgres could do.
Thank you very much.
This is a complicated topic and probably one to discuss in some depth. I think it is worth both explaining why PostgreSQL doesn't support this and also what you can do instead with recent versions to approach what you are trying to do.
PostgreSQL has a pretty good approach to caching diverse data sets across multiple users. In general you don't want to allow a programmer to specify that a temporary table must be kept in memory if it becomes very large. Temporary tables however are managed quite differently from normal tables in that they are:
Buffered by the individual back-end, not the shared buffers
Locally visible only, and
Unlogged.
What this means is that typically you aren't generating a lot of disk I/O for temporary tables. They don't generate WAL, and they are buffered by the local backend, so they don't affect shared buffer usage. Data only occasionally gets written to disk, and only when necessary to free memory for other (usually more frequent) tasks. You certainly aren't forcing disk writes, and you only need disk reads when something else has used up memory.
The end result is that you don't really need to worry about this. PostgreSQL already tries, to a certain extent, to do what you are asking it to do, and temporary tables have much lower disk I/O requirements than standard tables. It does not force the tables to stay in memory, though, and if they become large enough, the pages may spill into the OS disk cache and eventually on to disk. This is an important feature because it ensures that performance degrades gracefully when many people create many large temporary tables.
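To make the mechanics concrete, a small sketch (the table and the sizes are made up): temp_buffers controls the per-backend cache for temporary tables and must be set before the session first touches a temp table, and ON COMMIT DROP, which the question asks about, ties the table's lifetime to the enclosing transaction:

    -- Per-backend buffer for temporary tables; must be set before the
    -- session first uses a temp table (the value here is only an example).
    SET temp_buffers = '512MB';

    BEGIN;
    CREATE TEMPORARY TABLE ema_results (
        individual_id integer,
        ema           numeric
    ) ON COMMIT DROP;   -- dropped automatically when the transaction ends

    -- ... fill and query the table within the same transaction ...
    COMMIT;             -- the table is gone after this point

    -- To keep a temp table until the session ends instead, omit the
    -- ON COMMIT clause (the default is ON COMMIT PRESERVE ROWS).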

Extremely high QPS - DynamoDB vs MongoDB vs other noSQL?

We are building a system that will need to serve loads of small requests from day one. By "loads" I mean ~5,000 queries per second. For each query we need to retrieve ~20 records from a NoSQL database. There will be two batch reads: 3-4 records at first, and then 16-17 reads immediately after that (based on the result of the first read). That would be ~100,000 objects to read per second.
Until now we were thinking about using DynamoDB for this as it's really easy to start with.
Storage is not something I would be worried about as the objects will be really tiny.
What I am worried about is the cost of reads. DynamoDB costs $0.0113 per hour per 100 eventually consistent reads per second (eventual consistency is fine for us). That is $11.30 per hour for us, provided that all objects are up to 1KB in size. And that would be $5424 per month, based on 16 hours/day average usage.
So... $5424 per month.
I would consider other options but I am worried about maintenance issues, costs etc. I have never worked with such setups before so your advice would be really valuable.
What would be the most cost-effective (yet still hassle-free) solution for such read/write intensive application?
From your description above, I'm assuming that your 5,000 queries per second are entirely read operations. This is essentially what we'd call a data warehouse use case. What are your availability requirements? Does it have to be hosted on AWS and friends, or can you buy your own hardware to run in-house? What does your data look like? What does the logic which consumes this data look like?
You might get the sense that there really isn't enough information here to answer the question definitively, but I can at least offer some advice.
First, if your data is relatively small and your queries are simple, save yourself some hassle and make sure you're querying from RAM instead of disk. Any modern RDBMS with support for in-memory caching/tablespaces will do the trick. Postgres and MySQL both have features for this. In the case of Postgres make sure you've tuned the memory parameters appropriately as the out-of-the-box configuration is designed to run on pretty meager hardware. If you must use a NoSQL option, depending on the structure of your data Redis is probably a good choice (it's also primarily in-memory). However in order to say which flavor of NoSQL might be the best fit we'd need to know more about the structure of the data that you're querying, and what queries you're running.
If the queries boil down to SELECT * FROM table WHERE primary_key = {CONSTANT} - don't bother messing with NoSQL - just use an RDBMS and learn how to tune the dang thing. This is doubly true if you can run it on your own hardware. If the connection count is high, use read slaves to balance the load.
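For example, this is the kind of plan you want to see for that access pattern (the table is hypothetical; the point is just that the lookup should resolve to a single index probe):

    -- A straight key lookup; the plan should show an Index Scan
    -- (or Index Only Scan) on the primary key, touching only a few pages.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM objects WHERE id = 12345;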
Long-after-the-fact Edit (5/7/2013):
Something I should've mentioned before: EC2 is a really, really crappy place to measure the performance of self-managed database nodes. Unless you're paying through the nose, your I/O performance will be terrible. Your choices are to either pay big money for provisioned IOPS, RAID together a bunch of EBS volumes, or rely on ephemeral storage whilst syncing a WAL off to S3 or similar. All of these options are expensive and difficult to maintain, and they offer varying degrees of performance.
I discovered this for a recent project, so I switched to Rackspace. The performance increased tremendously there, but I noticed that I was paying a lot for CPU and RAM resources when really I just need fast I/O. Now I host with Digital Ocean. All of DO's storage is SSD. Their CPU performance is kind of crappy in comparison to other offerings, but I'm incredibly I/O bound so I Just Don't Care. After dropping Postgres' random_page_cost to 2, I'm humming along quite nicely.
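That last tweak is a one-liner if you want to try it (2 worked for me; values closer to 1 are also common on pure SSD storage, so test against your own workload):

    -- Try it at session level first; make it permanent in postgresql.conf
    -- (or via ALTER SYSTEM on 9.4+) once you've confirmed the win.
    SET random_page_cost = 2;
    EXPLAIN ANALYZE
    SELECT * FROM objects WHERE id = 12345;   -- hypothetical table from above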
Moral of the story: profile, tune, repeat. Ask yourself what-if questions and constantly validate your assumptions.
Another long-after-the-fact edit (11/23/2013): As an example of what I'm describing here, check out the following article on using MySQL 5.7 with the InnoDB memcached plugin to achieve 1M QPS: http://dimitrik.free.fr/blog/archives/11-01-2013_11-30-2013.html#2013-11-22
By "loads" I mean ~5,000 queries per second.
Ah, that's not so much; even a SQL database can handle that. So you are already easily within the limits of what most modern DBs can handle. However, they can only handle this with the right:
Indexes
Queries
Server Hardware
Splitting of large data sets (you might require a large number of shards with relatively little data each; it depends, which is why I said "might")
That would be ~100,000 objects to read per second.
Now that's more of a high-load scenario. Must you read these in such a fragmented manner? If so then (as I said) you may need to look into spreading the load across replicated shards.
Storage is not something I would be worried about as the objects will be really tiny.
Mongo is aggressive with disk allocation, so even with small objects it will still pre-allocate a lot of space; this is something to bear in mind.
So... $5424 per month.
Oh yeah, the billing thrills of Amazon :\
I would consider other options but I am worried about maintenance issues, costs etc. I have never worked with such setups before so your advice would be really valuable.
Now you hit the snag of it all. You can set up your own cluster, but then you might end up paying that much in money and time (or way more) for the servers, people, admins and your own maintenance time. This is one reason why DynamoDB really shines here: for large setups that are looking to take the load, pain and stress of server management (trust me, it is really painful; if you're a dev you might as well change your job title to server admin from now on) off of the company.
Considering to setup this yourself you would need:
A considerable number of EC2 instances (dependent on data and index size, but I would say close to maybe 30?)
A server admin (maybe 2, maybe freelance?)
Both of these could set you back hundreds of thousands of pounds a year. I would personally bet on the managed approach if it fits your needs and budget; when your needs grow beyond what the managed Amazon DB can give you, then move to your own infrastructure.
Edit
I should add that the cost-effectiveness estimate was made with quite a few unknowns, for example:
I am unsure of the amount of data you have
I am unsure of the write volume
Both of these lead me to assume a scenario of:
Massive writes (about as much as you are reading)
Massive data (lots)
Here is what I recommend in sequence.
Identify your use case and choose the correct DB. We test MySQL and MongoDB regularly for all kinds of workloads (OLTP, analytics, etc.). In all the cases we have tested, MySQL outperforms MongoDB and is cheaper ($/TPS). MongoDB has other advantages, but that is another story... since we are talking about performance here.
Try to cache your queries in RAM (by provisioning adequate RAM).
If you are bottlenecked on RAM, then you can try an SSD caching solution that takes advantage of ephemeral SSDs. This works if your workload is cache friendly. You can save loads of money, as ephemeral SSD storage is typically not charged for by the cloud provider.
Try PIOPS/RAID or a combination to create adequate IOPS for your application.

mongoDB vs relational databases when data can't fit into memory?

First of all, I apologize for my potentially shallow understanding of NoSQL architecture (and databases in general) so try to bear with me.
I'm thinking of using MongoDB to store resources associated with a UUID. The resources can be things such as large image files (tens of megabytes), so it makes sense to store them as files and store just links in my database along with the associated metadata. There's also the added flexibility of decoupling the actual location of the resource files, so I can use a different third party to store the files if I need to.
Now, one document which describes a resource would be about 1 kB. At first I expect a couple hundred thousand resource documents, which would equal some hundreds of megabytes in database size, easily fitting into server memory. But in the future I might have to scale this to the order of tens of MILLIONS of documents. That would be tens of gigabytes, which I can't squeeze into server memory anymore.
Only the index would still fit in memory, being around a gigabyte or two. But if I understand correctly, I'd have to read from disk every time I did a lookup on a UUID. Is there a substantial speed benefit from MongoDB over a traditional relational database in such a situation?
BONUS QUESTION: is there an existing, established way of doing what I'm trying to achieve? :)
MongoDB doesn't suddenly become slow the second the entire database no longer fits into physical memory. MongoDB currently uses a storage engine based on memory-mapped files. This means data that is accessed often will usually be in memory (OS managed, but assume an LRU scheme or something similar).
As such, it may not slow down at all at that point, or only slightly; it really depends on your data access patterns. It's a similar story with indexes: if you (right-)balance your index appropriately, and if your use case allows it, you can have a huge index with only a fraction of it in physical memory and still get very decent performance, with the majority of index hits happening in physical memory.
Because you're talking about UUIDs this might all be a bit hard to achieve, since there's no guarantee that the same limited group of users is generating the vast majority of the throughput. In those cases sharding really is the most appropriate way to maintain quality of service.
This would be tens of gigabytes which I can't squeeze into server memory anymore.
That's why MongoDB gives you sharding to partition your data across multiple mongod instances (or replica sets).
In addition to considering sharding, or maybe even before that, you should also try to use covered indexes as much as possible, especially if that fits your use cases.
This way you do not HAVE to load entire documents into memory. Your indexes can help out.
http://www.mongodb.org/display/DOCS/Retrieving+a+Subset+of+Fields#RetrievingaSubsetofFields-CoveredIndexes
If you have to display your entire document all the time based on the ID, then the general rule of thumb is to attempt to keep the working set in memory.
http://blog.boxedice.com/2010/12/13/mongodb-monitoring-keep-in-it-ram/
This is one of the resources that talks about that. There is a video on mongodb's site too that speaks about this.
By attempting to size the RAM so that the working set is in memory, and also looking at sharding, you will not have to shard right away; you can always add sharding later. This will improve the scalability of your app over time.
Again, these are not absolute statements; they are general guidelines. You should think through your usage patterns and make sure they are relevant to what you are doing.
Personally, I have not had the need to fit everything in RAM.