Table scan vs GSI when not scanning a lot - nosql

Is there any metric for when to choose a full table scan over a GSI, or the other way around?
I know the basic concept behind both, but the pricing model for a GSI depends so much on the table itself that I'm having a hard time deciding,
and, more importantly, how that scales with different table sizes, or how much scanning is too inefficient and calls for a GSI instead.
By the by, I'm having a hard time finding good resources on filter expressions for query and scan in DynamoDB; any good recommendations? ("#v >= :num" is what I mean; I'm probably not searching with the correct term.)

In general the decision to use a query versus a scan comes down to how much of the data you need. If the answer is that you need most of the data (which in practice can only really be the case for relatively small tables), then use a scan. Otherwise, use a query, pretty much every time.
It’s impossible to give a hard threshold for what ‘most of the data’ means. I’d say definitely more than 50% and that this threshold tends to 100% as the table size grows.
The exception to the above would be one-off operations that can be performed in the background and where you're willing to trade time for cost. The corollary is that if you are getting data for a customer-facing request, your aim should be to read as little from the database as possible to keep request times short.
All this being said, parallel scans can be super quick to pull in a lot of data, if you really need it and you are in a position to consume extra capacity. Even on largish tables, as long as you have the capacity to spare you can pull in hundreds of thousands of items in just a few seconds.
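For the filter-expression part of the question, here is a minimal boto3 sketch (the table name, index name and attributes are all made up) showing the "#v >= :num" style of expression on both a scan and a query, plus one segment of a parallel scan. Keep in mind that a scan's FilterExpression is applied after the items are read, so you are still billed for the full read; a query against a GSI only reads the matching partition.

```python
# Minimal sketch, assuming a hypothetical "Orders" table with a "views"
# attribute and a "customer-index" GSI keyed on customer_id.
import boto3
from boto3.dynamodb.conditions import Key, Attr

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table

# Scan: every item is read (and billed), the filter only trims the response.
scan_resp = table.scan(
    FilterExpression="#v >= :num",
    ExpressionAttributeNames={"#v": "views"},
    ExpressionAttributeValues={":num": 100},
)

# Same filter using the condition helpers instead of a raw expression string.
scan_resp2 = table.scan(FilterExpression=Attr("views").gte(100))

# Query against the (hypothetical) GSI: only items under that partition key
# are read, so you consume far fewer RCUs when the query is selective.
query_resp = table.query(
    IndexName="customer-index",
    KeyConditionExpression=Key("customer_id").eq("C-42"),
    FilterExpression=Attr("views").gte(100),
)

# Parallel scan: split the table into segments and scan them concurrently.
seg0 = table.scan(TotalSegments=4, Segment=0)
```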

Related

Postgres not using index when too much concurrent write load

Recently I ran into a situation where, in a simple query with one WHERE filter, Postgres doesn't use the index. The query is like this: select * from book where obj_id=465789. Sometimes we have a lot of writes to this table and selects running simultaneously. I read the article Postgres not using index when index scan is much better option and Erwin gave an excellent answer. But one thing I didn't understand: how does heavy concurrent write load affect whether the index is used or not?
The planner does not ponder how much concurrent writing there is when making its decisions, so there is no direct effect.
I can think of three possible indirect effects. Concurrent writers might keep the data distribution changing faster than auto-analyze can keep up, so the planner is working with bad row estimates (how many rows have obj_id=465789 right now). Intense writing can clear the visibility map faster than autovacuum can reset it, which will penalize index-only scan cost estimates. And intense writing can bloat the index, and the index size plays a (minor) role in estimating index cost.
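If you want to check whether any of these three effects is in play, a rough sketch like the following can pull the relevant statistics for the book table from the question (psycopg2, with a placeholder DSN and an assumed index name):

```python
# Sketch only: inspect analyze lag, vacuum lag and index size for "book".
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
cur = conn.cursor()

# 1. Stale row estimates: how much changed since the last (auto)analyze?
cur.execute("""
    SELECT n_mod_since_analyze, last_autoanalyze, last_analyze
    FROM pg_stat_user_tables
    WHERE relname = 'book'
""")
print("analyze lag:", cur.fetchone())

# 2. Visibility map churn: dead tuples and last (auto)vacuum time.
cur.execute("""
    SELECT n_dead_tup, last_autovacuum, last_vacuum
    FROM pg_stat_user_tables
    WHERE relname = 'book'
""")
print("vacuum lag:", cur.fetchone())

# 3. Index bloat (crude): compare index size to table size.
cur.execute("""
    SELECT relname, pg_size_pretty(pg_relation_size(oid))
    FROM pg_class
    WHERE relname IN ('book', 'book_obj_id_idx')  -- index name assumed
""")
print(cur.fetchall())

# And see what the planner estimates right now.
cur.execute("EXPLAIN SELECT * FROM book WHERE obj_id = 465789")
for row in cur.fetchall():
    print(row[0])
```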

Is it bad practice to keep everything in one table?

Looking for some feedback - I am building social networking type software; one of the features allows users to post news stories and have friends comment. I have in the past kept different tables for things like news, comments, calendar events, etc. However, a friend has turned me on to the WordPress-type database structure of "posts" and "post_types", where everything is in one table and has a "post_type".
This would mean that news stories, comments, events, etc. are all in the same table. I love the efficiency of creating functions that update one table. However, a single table in my old software was 1.5 million rows, and I'd expect this new table to grow to about 10 million in the first year.
Does MySQL handle this size of data okay as long as indexes are properly set, or is it smarter to break everything into separate tables for this reason?
There is no general answer. It depends.
MySQL has no problem dealing with large tables. However, it will not do miracles for you. In the end, it's all about efficiency. It means you need to optimize your design for multiple, mutually exclusive goals. What you want to find is a sweet spot between complexity, performance, extensibility and maintenance costs. This is different for every project and is kind of an art.
Generally, you don't want to mix things that are too different. This is why they teach data normalization in just about every database book or CS course. If your data is small, this does not really matter. But if you have a lot of data and a lot of requests, you will almost certainly want to squeeze every last drop of performance from your database. So not only will you be separating tables, scrutinizing indexes, inspecting execution plans, updating statistics, defragmenting pages and measuring performance, but you will also be using partitioning, clustering, materialized views, read-only replicas, I/O and CPU parallelism, SSDs, Memcached and a variety of other tools. This will all be much more challenging if you have started with a bad data model. In my personal experience, locking is something that really bites you in the ass with large tables, unless you can somehow live without transactions.
To make any kind of estimate, you need to have some performance baseline. Just knowing the number of records is not enough. How many requests will there be? What will the queries be doing? Where do you expect the heaviest load? Can you prepare the most common queries that the system will be running most of the time? What about peak hours? What hardware will be available to run this load? What is the ratio of reads to writes? Etc.
To make optimizations, you need some kind of goal. As always, you will find out that in order to get there, you have to sacrifice something. Because you probably don't have all those answers yet, try following the principle of minimalism - start small, measure, analyze, improve, repeat.
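As a concrete starting point for that baseline, here is a tiny measurement sketch (the connection details, table and column names, and queries are all hypothetical) that times the queries you expect to run most often against a realistically sized copy of the single posts table:

```python
# Baseline sketch, assuming a hypothetical "posts" table in a "social" schema.
import time
import mysql.connector  # pip install mysql-connector-python

cnx = mysql.connector.connect(user="app", password="secret", database="social")
cur = cnx.cursor()

candidate_queries = {
    "news_feed": (
        "SELECT id, title, created_at FROM posts "
        "WHERE post_type = 'news' ORDER BY created_at DESC LIMIT 20",
        (),
    ),
    "comments_for_post": (
        "SELECT id, body FROM posts "
        "WHERE post_type = 'comment' AND parent_id = %s",
        (12345,),
    ),
}

for name, (sql, params) in candidate_queries.items():
    start = time.perf_counter()
    cur.execute(sql, params)
    cur.fetchall()
    print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms")
```

Run it before and after adding a composite index such as (post_type, created_at), and again against a split-table variant, and let the numbers decide.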

Why don't most NoSQL DBMSs have “pointers”?

What is the objective reason why most NoSQL storage solutions don't have some kind of "pointers" for ultra-efficient joins, like the pre-relational DBMSs had?
I mean, I partially understand the theoretical reasons why classical RDBMSs ditched pointers (the need to update them and keep them synced between memory and disk, no "disks" fast enough to be treated as random-access for some use cases the way modern SSDs can, etc.).
But of the many NoSQL solutions out there, why do so few of them realize that this model would be awesome (the exceptions I know of would be OrientDB and Neo4j) for many practical cases, not only ones that need graph traversals? I mean, when you need things like multi-joins, you have to ping-pong with Mongo and do N queries instead of one.
Doesn't the use case of a NoSQL document DB overlap enough with that of graph DBs that such a feature would make sense? It would provide all the practical benefits of SQL joins to NoSQL solutions at not much extra cost, make indexes unnecessary for most queries, and take up much less space for huge datasets.
(...and as a bonus, any NoSQL solution would be ready to use as a graph DB, and doing a ~100-node path traversal of a graph stored in Mongo would just automagically work fast enough)
I believe the key problem is data locality and horizontal scalability. A premise of NoSQL is that the read-heavy models of RDBMSs, i.e. those that require joins, lead to bottlenecks.
Think of Twitter: the original data model was read-heavy, but the joins you need to make are insanely large (billions of tweets x hundreds of millions of users x tens of billions of follower-followee relations that are wildly varying in size [1-10M, or whatever aplusk has these days]).
When even the ids you'll want to join don't fit in a reasonable machine's RAM, calculating the overlap of ids becomes terribly expensive. If you take the actual data into account, horizontal scalability becomes next to impossible because there's no a priori knowledge of which shards/machines will need to be hit. Storing all follower pointers in every follower list would require insane bookkeeping for trivial changes, while not exploiting creation-time locality (or at least, creation-time locality per feed).
In a multi-tenant application, you can always shard by the tenants, or by the sales region or by agents or maybe even by time: You can find some locality criterion that is good for like > 95% of the cases.
With graphs, that becomes a lot more complicated, especially those which have certain connection properties (scale-free networks with small diameter / small world phenomenon): A simple post, say by a celebrity, can quickly spread through a large portion of the entire network, meaning that practically every query must hit the one node that holds the post.
Sure, the post itself would be cached by the web servers, but add likes and comments, or favorites and retweets, and the story becomes a nightmare (writes!). Add in notification emails, content ranking and filtering, and you're in true horror.
doing a ~100-node path traversal of a graph stored in Mongo would just automagically work fast enough
If that data happens to be on 100 different nodes, the sheer network overhead will be in the range of 50 ms, even in a single datacenter with no congestion and idle machines. If this spreads across the world or individual queries take a little longer, you'll quickly end up at 5000 ms. Also, the query would fail if even one machine is down.
This depends too much on the details of the network, which is why the problem should be solved by application code, not by the data store.
when you need things like multi-joins, you have to ping-pong with Mongo and do N queries instead of one
When you need multi-joins in MongoDB, you're using the wrong tool for your data model, or vice versa. Multi-join means normalized, which means read-heavy, which runs counter to the key concept of MongoDB. However, you can store quite large association lists even in MongoDB. But the tool becomes almost irrelevant here: if you look at Facebook's TAO, for instance, there's little technology dependence in it.
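To make the "N queries instead of one" point concrete, here is a small pymongo sketch (the collection and field names are made up) contrasting the naive ping-pong with a single batched $in lookup and with denormalization:

```python
# Sketch of an application-side "join", assuming a "users" collection whose
# documents carry a "friend_ids" list of referenced _id values.
from pymongo import MongoClient

db = MongoClient()["social"]

user = db.users.find_one({"_id": "alice"})

# Naive ping-pong: one query per friend, i.e. N round trips.
friend_docs = [db.users.find_one({"_id": fid}) for fid in user["friend_ids"]]

# Better: batch the lookup into a single round trip with $in.
friend_docs = list(db.users.find({"_id": {"$in": user["friend_ids"]}}))

# Or denormalize: embed the association list (or even the hot fields of the
# referenced documents) directly in the user document, which is the
# read-optimized shape MongoDB is designed around.
```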

PostgreSQL tuning best practices for data warehousing

I have found plenty of online and print guides on how to tune and optimize performance for Postgres for OLTP applications, but I haven't found anything of the sort specific to Data Warehousing applications. Since there are so many differences in the types of workload, I'm sure there has to be some differences in how the databases are managed and tuned.
Some of my own:
I have found from the DDL side that I use indexes a lot more liberally, since I usually only worry about inserts once a day and can do batch inserts with index rebuilds.
I will typically use integer surrogate keys for data that has more than one natural key, for faster joins.
I will usually define and maintain a very comprehensive date table that has prebuilt date manipulations (fiscal date as opposed to calendar date, fiscal year-month, starting day of the week, etc.) and use it liberally, as opposed to using functions in SELECT and WHERE clauses. This usually helps with CPU-bound aggregate queries.
I was hoping that I would find some information on memory management and other database settings, but I would be happy to hear any useful best practices specific to Postgres-based Data Warehousing.
My experience (admittedly on a pretty small scale when it comes to data warehouses):
Like you mention, pre-aggregating data is easily the most important thing, as it reduces the amount of data that needs to be read by many orders of magnitude.
Avoid short writing transactions, subtransactions and savepoints. This includes exception handling in PL/pgSQL. These burn through the available "transaction ID" space quickly and cause expensive "wraparound" vacuums that need to scan whole tables.
I found that partitioning tables such that each partition individually can fit in the kernel's cache is good for maintenance and migrations, if you ever need to do any. This means you can recreate all indexes on a partition with just 1 seq scan from disk, instead of one scan for each index.
Like Chris already mentioned, be generous with work_mem and maintenance_work_mem; if your workload doesn't fit in RAM then keeping more temporary data in memory saves I/O and CPU time due to smarter query plans (most importantly HashAggregate).
If you need to do huge sorts, it can help to buy a dedicated SSD for storing the temporary files.
From a memory management perspective, one of the biggest differences is that you can often hope to keep the working OLTP set in memory, while this is not the case in OLAP environments. Additionally, your joined sets are very often bigger. This means higher work_mem settings can be very helpful, and to the extent tables are denormalized, you can push work_mem a bit higher than it might be otherwise. I am not sure my advice on shared_buffers would change (I prefer to start low and increase, testing performance at each step), but work_mem certainly would need to increase if you are doing reporting on sets of any size.
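As an illustration of the work_mem point (the values, table names and DSN below are made up, not recommendations), the settings can be raised per session for reporting jobs and the plan checked for a HashAggregate:

```python
# Sketch only: session-local memory settings for a reporting connection,
# assuming hypothetical sales_fact and date_dim tables.
import psycopg2

conn = psycopg2.connect("dbname=dwh")  # placeholder DSN
cur = conn.cursor()

# SET is session-local: it doesn't touch postgresql.conf and resets on
# disconnect, so reporting sessions can be generous while OLTP stays lean.
cur.execute("SET work_mem = '512MB'")
cur.execute("SET maintenance_work_mem = '2GB'")  # for index builds here

cur.execute("""
    EXPLAIN
    SELECT d.fiscal_year_month, sum(f.amount)
    FROM sales_fact f
    JOIN date_dim d ON d.date_key = f.date_key
    GROUP BY d.fiscal_year_month
""")
for (line,) in cur.fetchall():
    print(line)  # look for HashAggregate vs. GroupAggregate + Sort
```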

A question about indexes regarding to the gain of inserts & updates in database

I have a question about the fine line between the cost of an index on a table that is growing steadily in size every month and the gain that index provides for queries.
The situation is that I have two tables, Table1 and Table2. Each table grows slowly but regularly each month (with about 100 new rows for Table1 and a couple of rows for Table2).
My concrete question is whether to have an index or to drop it. I've made some measurements showing that a covering index on Table2 improves some of my SELECT queries rather a lot, but again, I have to consider the pros and cons and am having a really hard time deciding.
For Table1 it might not be necessary to have an index, because SELECT queries there are not that common.
I would appreciate any suggestions, tips, or just good advice on what a good solution would be.
By the way, I'm using IBM DB2 version 9.7 as my database system.
Sincerely
Mestika
Any additional index will make your inserts slower and your queries faster.
To make a smart decision, you will have to measure exactly by how much, with the amount of data that you expect to see. If you have multiple clients accessing the database at the same time, it may make sense to write a small multithreaded application that simulates the maximum load, both for inserts and for queries.
Your results will depend on the nature of your data and on the hardware that you are running. If you want to know the best answer for your use case, there is no way around testing accurately yourself, with your data and your hardware.
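A minimal sketch of such a load simulator might look like this (assuming the ibm_db_dbi DB-API driver for DB2; the DSN, table and columns are placeholders). Run it once with the covering index in place and once without, and compare the timings.

```python
# Load-simulation sketch: concurrent inserts and queries against Table2.
import threading
import time
import ibm_db_dbi  # assumption: IBM's DB-API driver is installed

DSN = "DATABASE=mydb;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=user;PWD=secret"

def insert_worker(n_rows, timings):
    conn = ibm_db_dbi.connect(DSN, "", "")
    cur = conn.cursor()
    start = time.perf_counter()
    for i in range(n_rows):
        cur.execute("INSERT INTO Table2 (col1, col2) VALUES (?, ?)", (i, "x"))
    conn.commit()
    timings.append(("insert", time.perf_counter() - start))

def query_worker(n_queries, timings):
    conn = ibm_db_dbi.connect(DSN, "", "")
    cur = conn.cursor()
    start = time.perf_counter()
    for i in range(n_queries):
        cur.execute("SELECT col1, col2 FROM Table2 WHERE col1 = ?", (i,))
        cur.fetchall()
    timings.append(("query", time.perf_counter() - start))

timings = []
threads = [threading.Thread(target=insert_worker, args=(1000, timings)),
           threading.Thread(target=query_worker, args=(1000, timings))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(timings)
```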
Then you will have to ask yourself:
Which query performance do I need?
If the query performance is good enough without the index anyway, easy: Don't add the index!
Which insert performance do I need?
Can it drop below the needed limit with the additional index? If not, easy: Add the index!
If you discover that you absolutely need the index for query performance and you can't get the required insert performance with the index, you may need to buy better hardware. Solid state discs can do wonders for database servers and they are getting affordable.
If your system is running fine for everyone anyway, worry less, let it run as is.