I use GORM as my ORM with PostgreSQL, and I noticed this in its documentation:
GORM performs write (create/update/delete) operations inside a transaction to ensure data consistency. You can disable this during initialization if it is not required, and you will gain about a 30%+ performance improvement after that.
https://gorm.io/docs/transactions.html
Consistency is important for my use case, but I'm wondering whether this is really necessary and worth the performance hit.
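For reference, disabling it during initialization would look something like this (a rough sketch; the DSN is a placeholder):

```go
package main

import (
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

func main() {
	// Placeholder DSN; replace with your real connection settings.
	dsn := "host=localhost user=app password=secret dbname=app port=5432"

	// SkipDefaultTransaction turns off the implicit transaction GORM
	// otherwise wraps around single create/update/delete calls.
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
		SkipDefaultTransaction: true,
	})
	if err != nil {
		panic(err)
	}
	_ = db
}
```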
There is a saying: you can make it arbitrarily fast if you don't have to do it correctly. What good is broken data?
But in this case, I have to doubt the claim. In an ACID-compliant relational database, you always pay the price for transactional processing.
By default, every statement in PostgreSQL runs in its own transaction. So if you start an explicit transaction that spans several data-modifying statements, you actually gain performance, since you don't have to pay the price of a commit as often.
The only extra cost is the two additional network round trips for sending BEGIN; and COMMIT;. But if you have high network latency, you can say goodbye to OLTP performance anyway.
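To illustrate the point about spanning several statements: with GORM you can batch the writes into one explicit transaction yourself, so you pay for a single COMMIT instead of one per statement. A rough sketch (the Order/OrderItem models are made up for the example):

```go
package shop

import "gorm.io/gorm"

// Order and OrderItem are hypothetical models for the sake of the example.
type Order struct {
	ID    uint
	Total int
}

type OrderItem struct {
	ID      uint
	OrderID uint
	SKU     string
}

// createOrder writes an order and its items in one explicit transaction,
// so only a single COMMIT is paid instead of one per statement.
func createOrder(db *gorm.DB, order *Order, items []OrderItem) error {
	return db.Transaction(func(tx *gorm.DB) error {
		if err := tx.Create(order).Error; err != nil {
			return err // returning an error rolls everything back
		}
		for i := range items {
			items[i].OrderID = order.ID
		}
		return tx.Create(&items).Error // a nil result commits the whole batch
	})
}
```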
Related
I must be wrong about this. I'm considering using MongoDB in my project, but I read this:
http://docs.mongodb.org/manual/faq/concurrency/#what-type-of-locking-does-mongodb-use
It says that MongoDB uses a database-level readers-writer lock.
MySQL InnoDB uses row-level locking. Doesn't that mean that, theoretically, MongoDB is two levels slower than MySQL for concurrent access?
If you look up readers-writer locks, you will find that they are a completely different animal from the database locks MySQL is referring to when it uses the phrase "row-level locking".
A readers-writer lock protects shared memory access and is therefore extremely short-lived (on the order of microseconds). Since MongoDB operations are only atomic at the document level, these locks (in traditional databases they are sometimes referred to as latches and are used to guard index access) are only held for as long as a single document takes to update in memory.
A regular "database lock" will usually exist until the transaction in progress has either been committed or rolled back. Because RDBMS transactions can span multiple operations across many tables, these locks are normally much longer-lived and therefore must be much more granular to allow other work to happen concurrently.
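If it helps to see the difference in lifetime, here is a toy Go sketch of a readers-writer lock guarding an in-memory map: the lock is held only for the microseconds the in-memory update takes, which is what makes it such a different thing from a lock that lives until COMMIT.

```go
package lockdemo

import "sync"

// documentStore is a toy in-memory store guarded by a readers-writer lock,
// loosely analogous to the short-lived latches described above.
type documentStore struct {
	mu   sync.RWMutex
	docs map[string]string
}

// Get takes the lock in shared (reader) mode; many readers can hold it at once.
func (s *documentStore) Get(id string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	doc, ok := s.docs[id]
	return doc, ok
}

// Put takes the lock exclusively, but only for the in-memory update itself,
// not for the lifetime of any surrounding business operation.
func (s *documentStore) Put(id, doc string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.docs[id] = doc
}
```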
doesn't that mean that, theoretically, MongoDB is two levels slower than MySQL for concurrent access?
No, it really does not. Depending on your exact workload, MongoDB could be a lot faster, a little faster, or slower; it all depends on the types of operations you are doing, your available physical resources, the structure of your data, and the needs of your application.
Applications that write a lot of data to the database in MongoDB tend to be limited primarily by the available disk I/O throughput. Only when the available disk bandwidth exceeds the amount of writes done by the application would you see concurrency become a factor with MongoDB. With relational databases, because of the longer lifetimes of locks, concurrency can become a factor much earlier, even with relatively small amounts of total data being written.
I'm building a tool for data extraction and transformation. The typical use case is transactionally processing lots of data.
The numbers: transactions run for about 10 seconds to 5 minutes and update 200-10,000 rows (the long duration is caused not by the database itself but by outside services used during the transaction).
There are two types of agents that access the database: multiple read agents and only one write agent (so there are never multiple concurrent writes).
During the transaction:
Read agents should be able to read the database and see it in the current (committed) state.
The write agent should be able to read the database (it both reads and writes during the transaction) and see it in the new, not-yet-committed state.
Is PostgreSQL a good choice for that type of load? I know it uses MVCC, so it should be OK in general, but is it OK to use long and big transactions extensively?
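To be precise, what I need is essentially what I understand PostgreSQL's default READ COMMITTED isolation to provide. A rough sketch of the expected behavior (the items table and the connection string are placeholders):

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // driver choice is just an example
)

func main() {
	// Placeholder connection string.
	db, err := sql.Open("postgres", "postgres://app:secret@localhost/app?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// The single write agent works inside one long transaction and sees
	// its own uncommitted changes...
	tx, err := db.Begin()
	if err != nil {
		panic(err)
	}
	if _, err := tx.Exec(`UPDATE items SET state = 'processed' WHERE id = 1`); err != nil {
		panic(err)
	}
	var state string
	_ = tx.QueryRow(`SELECT state FROM items WHERE id = 1`).Scan(&state)
	fmt.Println("writer sees:", state) // 'processed'

	// ...while read agents on other connections keep seeing the last
	// committed state until the writer commits (MVCC, no blocking).
	_ = db.QueryRow(`SELECT state FROM items WHERE id = 1`).Scan(&state)
	fmt.Println("reader sees:", state) // previous committed value

	_ = tx.Commit()
}
```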
What other open-source transactional databases may be a good choice (I am not limited to SQL)?
P.S.
I do not know whether sharding may affect performance. The database will be sharded; for every shard there will be multiple readers and only one writer, but multiple different shards can be written to at the same time.
I know that it's better not to use outside services during a transaction, but in this case that's the goal. The database is used as a reliable and consistent index for a heavy, huge, slow and eventually-consistent data processing tool.
Huge disclaimer: as always, only a real-life test can tell you the truth.
But I think PostgreSQL will not let you down if you use a recent version (at least 9.1, better 9.2) and tune it properly.
I have a somewhat similar load on my server, but with a slightly worse read/write ratio of about 10:1. Transactions range from a few milliseconds up to 1 hour (and sometimes even more), and one transaction can insert or update up to 100k rows. The total number of concurrent writers with long transactions can reach 10 or more.
So far so good - I don't really have any serious issues, performance is great (certainly not worse than I expected).
What really helps is that my hot working data set almost fits into available memory.
So, give it a try, it should work great for your load.
Have a look at this link: Maximum transaction size in PostgreSQL
Basically, there can be some technical limits on the software side to how large your transaction can be.
I have found plenty of online and print guides on how to tune and optimize performance for Postgres for OLTP applications, but I haven't found anything of the sort specific to Data Warehousing applications. Since there are so many differences in the types of workload, I'm sure there has to be some differences in how the databases are managed and tuned.
Some of my own:
On the DDL side, I have found that I use indexes a lot more liberally, since I usually only worry about inserts once a day and can do batch inserts followed by index rebuilds.
I typically add integer surrogate keys to data that has more than one natural key, for faster joins.
I usually define and maintain a very comprehensive date table that has prebuilt date manipulations (fiscal date as opposed to calendar date, fiscal year-month, starting day of the week, etc.) and use it liberally instead of calling date functions in SELECT and WHERE clauses. This usually helps during CPU-bound aggregate queries.
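For illustration, here is roughly how such a date table could be built (Go with embedded SQL; table and column names are just examples, and a fiscal year starting in October is assumed):

```go
package warehouse

import "database/sql"

// buildDateDim creates and populates a simple date dimension table.
// Names and the October fiscal-year start are illustrative only.
func buildDateDim(db *sql.DB) error {
	if _, err := db.Exec(`
		CREATE TABLE IF NOT EXISTS date_dim (
			date_key      date PRIMARY KEY,
			calendar_year int  NOT NULL,
			fiscal_year   int  NOT NULL,
			fiscal_month  int  NOT NULL,
			week_start    date NOT NULL
		)`); err != nil {
		return err
	}

	// Precompute the derived fields once, instead of calling date
	// functions in every SELECT and WHERE clause.
	_, err := db.Exec(`
		INSERT INTO date_dim (date_key, calendar_year, fiscal_year, fiscal_month, week_start)
		SELECT d::date,
		       EXTRACT(year FROM d)::int,
		       EXTRACT(year FROM d + interval '3 months')::int,
		       EXTRACT(month FROM d + interval '3 months')::int,
		       date_trunc('week', d)::date
		FROM generate_series('2000-01-01'::date, '2030-12-31'::date, '1 day') AS d
		ON CONFLICT (date_key) DO NOTHING`)
	return err
}
```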
I was hoping that I would find some information on memory management and other database settings, but I would be happy to hear any useful best practices specific to Postgres-based Data Warehousing.
My experience (admittedly on a pretty small scale when it comes to data warehouses):
Like you mention, pre-aggregating data is easily the most important thing, as it reduces the amount of data that needs to be read by many orders of magnitude.
Avoid short writing transactions, subtransactions and savepoints. This includes exception handling in PL/pgSQL. These burn through the available "transaction ID" space quickly, and cause expensive "wraparound" vacuums that need to rewrite whole tables.
I found that partitioning tables such that each partition individually can fit in the kernel's cache is good for maintenance and migrations, if you ever need to do any. This means you can recreate all indexes on a partition with just 1 seq scan from disk, instead of one scan for each index.
Like Chris already mentioned, be generous with work_mem and maintenance_work_mem; if your workload doesn't fit in RAM then keeping more temporary data in memory saves I/O and CPU time due to smarter query plans (most importantly HashAggregate).
If you need to do huge sorts, it can help to buy a dedicated SSD for storing the temporary files.
From a memory management perspective, one of the biggest differences is that with OLTP you can often hope to keep the working set in memory, while this is not the case in OLAP environments. Additionally, your joined sets are very often bigger. This means higher work_mem settings can be very helpful, and to the extent that tables are denormalized, you can push work_mem a bit higher than it might be otherwise. I am not sure my advice on shared_buffers would change (I prefer to start low and increase, testing performance at each step), but work_mem certainly needs to increase if you are doing reporting on sets of any size.
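One practical pattern (sketched here in Go; the value and query are purely illustrative) is to raise work_mem with SET LOCAL only for the big reporting transaction, so the higher setting never applies server-wide:

```go
package reports

import "database/sql"

// runReport raises work_mem for just this transaction; SET LOCAL reverts
// automatically at COMMIT/ROLLBACK. The value and query are illustrative.
func runReport(db *sql.DB) (int64, error) {
	tx, err := db.Begin()
	if err != nil {
		return 0, err
	}
	defer tx.Rollback() // no-op if the transaction was committed

	if _, err := tx.Exec(`SET LOCAL work_mem = '512MB'`); err != nil {
		return 0, err
	}

	// Hypothetical memory-hungry aggregate that benefits from the larger
	// work_mem (e.g. a HashAggregate over a big fact table).
	var orders int64
	if err := tx.QueryRow(`SELECT count(DISTINCT order_id) FROM fact_sales`).Scan(&orders); err != nil {
		return 0, err
	}
	return orders, tx.Commit()
}
```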
I am developing a Java-based web application. The primary aim is to have an inventory of products being sold on multiple websites, called channels. We will act as a manager for all these channels.
What we need is:
Queues to manage inventory updates for each channel.
Inventory table which has a correct snapshot of allocation on each channel.
Keeping session IDs and other fast-access data in a cache.
Providing a Facebook-like dashboard (XMPP) to keep the seller updated as soon as possible.
The solutions I am looking at are Postgres (our DB till now, in synchronous replication mode) and NoSQL solutions like Cassandra, Redis, CouchDB and MongoDB.
My constraints are:
Inventory updates cannot be lost.
Job Queues should be executed in order and preferably never lost.
Easy/Fast development and future maintenance.
I am open to any suggestions. Thanks in advance.
Queues to manage inventory updates for each channel.
This is not necessarily a database issue. You might be better off looking at a messaging system (e.g. RabbitMQ).
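For illustration, publishing an inventory update to such a queue with the Go AMQP client could look roughly like this (broker URL, queue name and payload are placeholders):

```go
package main

import (
	"context"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Placeholder broker URL.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		panic(err)
	}
	defer ch.Close()

	// Durable queue so the queue itself survives a broker restart.
	q, err := ch.QueueDeclare("inventory-updates", true, false, false, false, nil)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Persistent delivery mode so the message is written to disk as well.
	err = ch.PublishWithContext(ctx, "", q.Name, false, false, amqp.Publishing{
		DeliveryMode: amqp.Persistent,
		ContentType:  "application/json",
		Body:         []byte(`{"sku":"ABC-123","channel":"web","qty":5}`),
	})
	if err != nil {
		panic(err)
	}
}
```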
Inventory table which has a correct snapshot of allocation on each channel.
Keeping session IDs and other fast-access data in a cache.
Session data should probably be put in a separate database more suitable for the task (e.g. memcached, Redis, etc.).
There is no one-size-fits-all DB
Providing a Facebook-like dashboard (XMPP) to keep the seller updated as soon as possible.
My constraints are:
1. Inventory updates cannot be lost.
There are 3 ways to answer this question:
This feature must be provided by your application. The database can guarantee that a bad record is rejected and rolled back, but it cannot guarantee that every query actually gets executed.
The app will have to be smart enough to recognize when an error happens and try again.
Some DBs store records in memory and then flush memory to disk periodically, which could lead to data loss in the case of a power failure. (E.g. Mongo works this way by default unless you enable journaling. CouchDB always appends to the records; even a delete is a flag appended to the record, so data loss is extremely difficult.)
Some DBs are designed to be extremely reliable: even if an earthquake, hurricane or other natural disaster strikes, they remain durable. These include Cassandra, HBase, Riak, Hadoop, etc.
Which type of durability are you referring to?
Job Queues should be executed in order and preferably never lost.
Most NoSQL solutions prefer to run in parallel, so you have two options here:
1. Use a DB that locks the entire table for every query (slower).
2. Build your app to be smarter or evented (client-side sequential queuing).
Easy/Fast development and future maintenance.
Generally, you will find that SQL is faster to develop with at first, but changes can be harder to implement.
NoSQL may require a little more planning, but it is easier to do ad hoc queries or schema changes.
The questions you probably need to ask yourself are more like:
"Will I need to have intense queries or deep analysis that a Map/Reduce is better suited to?"
"Will I need to change my schema frequently?"
"Is my data highly relational? In what way?"
"Does the vendor behind my chosen DB have enough experience to help me when I need it?"
"Will I need special features such as geospatial indexing, full-text search, etc.?"
"How close to real time will I need my data? Will it hurt if I don't see the latest records show up in my queries until one second later? What level of latency is acceptable?"
"What do I really need in terms of fail-over?"
"How big is my data? Will it fit in memory? Will it fit on one computer? Is each individual record large or small?"
"How often will my data change? Is this an archive?"
If you are going to have multiple customers (channels?), each with their own inventory schemas, a document-based DB might have its advantages. I remember one time I looked at an e-commerce system with inventory and it had almost 235 tables!
Then again, if you have certain relational data, a SQL solution can really have some advantages too.
I can certainly see how I could build a solution using Mongo, Couch, Riak or OrientDB with the given constraints. But as for which is the best? I would try talking directly to the DB vendors, and maybe watching the NoSQL Tapes.
Addressing your constraints:
Most NoSQL solutions give you a configurable tradeoff between consistency and performance. In MongoDB, for instance, you can decide how durable a write should be. If you want, you can force the write to be fsync'ed on all your replica set servers. At the other extreme, you can choose to send the command and not even wait for the server's response.
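For example, with MongoDB's Go driver that tradeoff can be expressed in the connection string itself; the URI below is a placeholder, with w=majority plus journaling at the durable end of the spectrum and w=0 at the fire-and-forget end:

```go
package main

import (
	"context"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Durable end of the spectrum: the write is acknowledged only after a
	// majority of replica set members have journaled it. Placeholder URI.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(
		"mongodb://localhost:27017/?replicaSet=rs0&w=majority&journal=true"))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(ctx)

	// The fire-and-forget end would instead use w=0 (no acknowledgement).
	_ = client
}
```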
Executing job queues in order seems to be an application-code issue. I'd say a timestamp in the DB and an ORDER BY type of query should do for most applications. If you have multiple application servers and your queues need to be perfect, you'd have to use a truly distributed algorithm that provides ordering, but that is not a typical requirement, and it's very tricky indeed.
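A rough sketch of that timestamp-plus-ORDER-BY approach in Go against Postgres (table and column names are made up; FOR UPDATE SKIP LOCKED only matters if you ever run more than one worker):

```go
package jobs

import "database/sql"

// nextJob claims the oldest unprocessed job inside the caller's transaction.
// Table and column names are hypothetical; ordering comes from the enqueue
// timestamp, with id as a tie-breaker. sql.ErrNoRows means the queue is empty.
func nextJob(tx *sql.Tx) (int64, string, error) {
	var id int64
	var payload string
	err := tx.QueryRow(`
		SELECT id, payload
		FROM job_queue
		WHERE processed_at IS NULL
		ORDER BY enqueued_at, id
		LIMIT 1
		FOR UPDATE SKIP LOCKED`).Scan(&id, &payload)
	if err != nil {
		return 0, "", err
	}
	// Mark the job as done in the same transaction, so a crash before
	// COMMIT simply puts it back on the queue.
	_, err = tx.Exec(`UPDATE job_queue SET processed_at = now() WHERE id = $1`, id)
	return id, payload, err
}
```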
We've been using MongoDB for some time now, and I'm convinced it gives your app development speed a real boost. There's no big difference in maintenance: maintaining data is a pain either way. Not having a schema gives you added flexibility (lazy migrations), but it's more elaborate and requires some care.
In summary, I'd say you can do it both ways. NoSQL is more code-driven, and transactions and relational integrity are mostly managed by your code. If you're uncomfortable with that, go for a relational DB.
However, if your data grows huge, you'll have to code some of this logic manually anyway, because you probably wouldn't want to do real-time joins on a 10B-row database. Still, you can implement that with SQL as well.
A good way to find the boundary between different databases is to consider what you can cache. Data that can be cached and reconstructed at any time is a great place to start introducing a new layer, because there's no big risk there. Also, cached data usually doesn't keep any relations, so you're not sacrificing any consistency here.
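As a concrete shape of that idea, here is a cache-aside sketch with go-redis (the key format, TTL and query are arbitrary):

```go
package cache

import (
	"context"
	"database/sql"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// getProduct tries Redis first, falls back to the database, then repopulates
// the cache. The cache can be dropped at any time and rebuilt from the DB.
func getProduct(ctx context.Context, rdb *redis.Client, db *sql.DB, id int64) (string, error) {
	key := fmt.Sprintf("product:%d", id)

	if cached, err := rdb.Get(ctx, key).Result(); err == nil {
		return cached, nil // cache hit
	}

	// Cache miss (or Redis error): read the source of truth.
	var name string
	if err := db.QueryRowContext(ctx, `SELECT name FROM products WHERE id = $1`, id).Scan(&name); err != nil {
		return "", err
	}

	// Best-effort repopulation; losing this write only costs a future miss.
	rdb.Set(ctx, key, name, 10*time.Minute)
	return name, nil
}
```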
NoSQL is not correct for this application.
I mean, you can use it, sure, but you will end up re-implementing a lot of what SQL offers you. For example, I see a lot of relations there. You also want ACID (although some NoSQL solutions do offer that).
There is no reason you can't use both - keep relational data in relational databases, and non-relational data in key/value stores.
I'd like to know when you should consider using multiple tables in your query store.
For example, consider the problem where a product has its description changed. This change could potentially have a massive impact on the synchronisation of the read-only query store if you had many aggregates that included the product description.
At which point should you consider a slight normalization of the data to avoid lengthy synchronisation issues? Is this a no-no or an acceptable compromise?
Thanks,
CQRS is not about using table-per-view, rather table-per-view is an aspect of a system that CQRS makes easier.
It's up to you; it depends on your specific context and needs. I would look at it this way: what is the cost of the eventual consistency of that query vs. the need for high query performance? You may want to consider the following two characteristics of your system:
1) The average consistency delay of that command, i.e., how long it takes to update all of the read models affected by the command (also consider whether an optimized stored proc for the change would outperform, say, using an ORM or other abstraction to update your database in this way).
My guess is that unless you are talking millions upon millions of records, the consistency here is sufficient to meet your requirements and user expectations, maybe a few seconds.
2) The importance of query performance. How many queries are you getting per second? Can you handle doing a SQL join every time?
In most practical scenarios, optimizing either of these things is moot. You can probably do the update, regardless of the number of records, with a good SP in seconds, which is more than enough consistency for a UI refresh (keep in mind that the UI that issued the command can be consistent as soon as it knows the command succeeded).
And you usually don't need so much query scaling in a system that a single join will hurt you. What you may not want is the added internal complexity of performing these joins in your code and stored procs.
As with all things in CQRS, you don't need to use and optimize every aspect of it from day one. You can optimize these things incrementally. Use joins today, and fully denormalize tomorrow, or vice-versa.
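As a concrete example of the single bulk update mentioned above, propagating a product description change into a denormalized read model can be one statement (table and column names are made up):

```go
package readmodel

import "database/sql"

// applyDescriptionChange pushes a new product description into every
// denormalized read-model row that embeds it. One UPDATE covers all
// affected rows, regardless of how many there are.
func applyDescriptionChange(db *sql.DB, productID int64, description string) error {
	_, err := db.Exec(`
		UPDATE order_summary_view
		SET product_description = $1
		WHERE product_id = $2`,
		description, productID)
	return err
}
```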