How to export/import OrientDB dumps and retain IDs
I'm running into issues because OrientDB's EXPORT and IMPORT don't respect the integrity of RIDs (it is baffling to me that something meant to preserve data doesn't retain its ids, but that's a topic for another time).
According to the documentation, there is something called ___exportImportRIDMap that is a mapping from the old RIDs to the new ones. How does one actually use this in practice? Does anyone have any examples?
We would like to be able to read state inside a command use case.
We could get the state from the event store for a specific aggregate, but what about querying aggregates by field (not by id), or performing more complicated queries that are not a good fit for the event store?
The approach we were considering was to use our read model for those cases as well, not only for query use cases.
This might be inconsistent, so one solution could be to store the latest version of the aggregate in both the write and read models, in order to be able to tell whether the state is current or stale.
Does this make sense? And if so, when we need to get state by id, should we use the event store or the read model?
If you want the absolute latest state of an event-sourced aggregate, you're going to have to read the latest snapshot (assuming that you are snapshotting) and then replay events since that snapshot from the event store. You can be aggressive about snapshotting (conceivably even saving a snapshot after every command), but you're giving away some write performance to make the read faster.
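To make that load path concrete, here is a minimal sketch assuming hypothetical SnapshotStore and EventStore interfaces and a toy AccountState aggregate; none of these names come from a specific framework:

```java
import java.util.List;
import java.util.Optional;

// All names below are illustrative; adapt them to your own event/snapshot stores.
interface Event { }

interface SnapshotStore {
    Optional<AccountState> latest(String aggregateId);          // newest snapshot, if any
}

interface EventStore {
    List<Event> eventsSince(String aggregateId, long version);  // events recorded after that version
}

// Example aggregate state with a pure "apply" fold.
record AccountState(long version, long balance) {
    static AccountState empty() { return new AccountState(0L, 0L); }

    AccountState apply(Event e) {
        // Fold the event into the state; the real logic depends on your domain.
        return new AccountState(version + 1, balance);
    }
}

class AggregateLoader {
    private final SnapshotStore snapshots;
    private final EventStore events;

    AggregateLoader(SnapshotStore snapshots, EventStore events) {
        this.snapshots = snapshots;
        this.events = events;
    }

    /** Latest state = newest snapshot + every event recorded after it. */
    AccountState load(String aggregateId) {
        AccountState state = snapshots.latest(aggregateId).orElse(AccountState.empty());
        for (Event e : events.eventsSince(aggregateId, state.version())) {
            state = state.apply(e);
        }
        return state;
    }
}
```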
Updating the read model directly is conceivably possible, though that level of coupling is something that should be considered very carefully. Note also that you will very likely need some sort of two-phase commit to ensure that the read model is only updated when the write model is updated and vice versa. I strongly suggest considering why you're using CQRS/ES in this project, because you are quite possibly undermining that reason by doing this sort of thing.
In general, if you need a query for processing a particular command, it's likely that query will generally be the same, i.e. you don't need free-form query support. In that case, you can often have a read model that's tuned for exactly that query and which only cares about events which could affect that query: often a fairly small subset of the events. The finer-grained the read model, the easier it is to keep in sync (if it ignores 99% of events, for instance, it can't really fall that far behind).
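As an illustration of such a narrowly-scoped read model, here is a hedged sketch (all event and class names are made up) of a projection that exists only to answer "is this email address already taken?" and ignores every other event:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative event types; your domain will have its own.
sealed interface DomainEvent permits EmailChanged, NameChanged { }
record EmailChanged(String customerId, String email) implements DomainEvent { }
record NameChanged(String customerId, String name) implements DomainEvent { }

/**
 * A read model tuned for exactly one command-side question:
 * "is this email address already taken?". It only reacts to the single
 * event type that can change the answer, so it is cheap to keep in sync.
 */
class EmailUniquenessProjection {
    private final Map<String, String> emailByCustomer = new ConcurrentHashMap<>();
    private final Set<String> takenEmails = ConcurrentHashMap.newKeySet();

    void on(DomainEvent event) {
        if (event instanceof EmailChanged e) {
            String previous = emailByCustomer.put(e.customerId(), e.email());
            if (previous != null) {
                takenEmails.remove(previous);   // the old address is free again
            }
            takenEmails.add(e.email());
        }
        // NameChanged (and everything else) cannot affect this query: ignore it.
    }

    boolean isTaken(String email) {
        return takenEmails.contains(email);
    }
}
```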
Needing to make complex queries as part of command processing could also be a sign that your aggregate boundaries aren't right and could do with a re-examination.
Does this make sense
Maybe. Let's start with
This might be inconsistent
Yup, they might be. So what?
We typically respond to a query by sending an unlocked copy of the answer. In other words, it's possible that the actual information in the write model will change after this response is dispatched but before the response arrives at its destination. The client will be looking at a copy of the answer taken from the past.
So we might reasonably ask how much better it is to get information no more than one minute old compared to information no more than five minutes old. If the difference in value is pennies, then you should probably deploy the five minute version. If the difference is millions of dollars, then you're in a good position to negotiate a real budget to solve the problem.
For processing a command in our own write model, that kind of inconsistency isn't usually acceptable or wise. But neither of the two common answers requires keeping the read and write models synchronized. The most common answer is to just work with the write model alone. The less common answer is to grab a snapshot out of a cache and then apply any additional events to it to bring it up to date. The latter approach is "just" a performance optimization (first rule: don't).
The variation that trips everyone up is trying to process a command somewhere else while enforcing a consistency rule on our data here. Once again, you need a really clear picture of how valuable the consistency is to the business. If it's really important, that may be a signal that the information in question shouldn't be split into two different piles - you may be working with the wrong underlying data model.
Possibly useful references
Pat Helland, "Data on the Outside Versus Data on the Inside"
Udi Dahan, "Race Conditions Don't Exist"
So, in DynamoDB the recommended approach to a many-to-many relationship is the adjacency list pattern.
Now, it works great when you need to read the data, because you can easily read several items with one request.
But what if I need to update/delete the data? These operations happen on a specific item rather than on a query result.
So if I have thousands of replicated items to facilitate a GET operation, how am I going to update all of those replicas?
The easiest way I can think of is, instead of duplicating the data, to store only an immutable ID, but that's pretty much emulating a relational database and will take at least two requests.
Simple answer: You just update the duplicated items :) AFAIK redundant data is preferred in NoSQL databases and there are no shortcuts to updating data.
This of course works best when the read/write ratio of the data is heavily on the read side. And in most everyday apps that is the case (my gut feeling, which could be wrong), so updates to data are rare compared to queries.
DynamoDB has a couple of utilities that might be applicable here. Both have their shortcomings, though.
BatchWriteItem allows you to put or delete multiple items in one or more tables. Unfortunately, it does not allow updates, so it's probably not applicable to your case. The number of operations is also limited to 25.
TransactWriteItems allows you to perform an atomic operation that groups up to 10 action requests across one or more tables. Again, the number of operations is limited for your case.
My understanding is that both of these should be used with caution and consideration, since they might cause performance bottlenecks for example. The simple way of updating each item separately is usually just fine. And since the data is redundant, you can use async operations to make multiple updates in parallel.
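For what it's worth, here is a hedged sketch of that "query the replicas, then update each one in parallel" approach using the AWS SDK for Java v2 async client. The table name, GSI name, and attribute names (AppTable, DataId-index, DataId, PK, SK, displayName) are invented for the example, and pagination/error handling are omitted:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemResponse;

public class ReplicaUpdater {

    private final DynamoDbAsyncClient dynamo = DynamoDbAsyncClient.create();

    /**
     * Hypothetical layout: every item that embeds a copy of the user's data
     * also stores the user's id in a "DataId" attribute, which is the
     * partition key of a GSI named "DataId-index".
     */
    public void renameUserEverywhere(String userId, String newName) {
        QueryRequest findReplicas = QueryRequest.builder()
                .tableName("AppTable")
                .indexName("DataId-index")
                .keyConditionExpression("DataId = :id")
                .expressionAttributeValues(Map.of(":id", AttributeValue.fromS(userId)))
                .build();

        // One UpdateItem per replica; only the first result page is handled here.
        List<CompletableFuture<UpdateItemResponse>> updates =
                dynamo.query(findReplicas).join().items().stream()
                        .map(item -> dynamo.updateItem(UpdateItemRequest.builder()
                                .tableName("AppTable")
                                .key(Map.of("PK", item.get("PK"), "SK", item.get("SK")))
                                .updateExpression("SET #n = :n")
                                .expressionAttributeNames(Map.of("#n", "displayName"))
                                .expressionAttributeValues(
                                        Map.of(":n", AttributeValue.fromS(newName)))
                                .build()))
                        .collect(Collectors.toList());

        // The replicas are independent items, so the writes can run in parallel.
        CompletableFuture.allOf(updates.toArray(new CompletableFuture[0])).join();
    }
}
```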
My company has two locations: headquarters (HQ) and a sub-location (SL). Both locations have separate PostgreSQL (9.6) databases with the same schema, and each performs transactions (insert/delete/update) while offline. When both come back online, the two databases need to be synchronized (i.e., brought to the same state). Is this possible?
Is there some configuration setting you can just turn on? No.
Is it possible at all? Yes, you could write something to do it, but it's going to be very schema-specific and probably isn't actually something you're going to want:
The problem with what you're proposing is unique columns (including primary keys and serial columns): how do you resolve conflicts between HQ and SL?
Maybe HQ always wins. If there's a conflict between the two we just delete the row in SL.
Maybe every unique column includes some component that designates which site the value came from.
Maybe you just flag the conflict and have an operator deal with it manually. (This is how Lotus Domino handles the situation.)
All of these are going to require information about how the data is being used that Postgres just doesn't have. Because of that, managing this resync will likely have to be done by your own custom code, and you'd have to be extremely careful about how it handles conflicts.
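Purely for illustration, here is roughly what the first option ("HQ always wins") could look like as custom reconciliation code over JDBC. Everything here is hypothetical: a single customers table with id/name/email columns, a last_modified timestamp, and a known cutoff for when the link went down. A real solution needs per-table rules, the reverse direction, and far more care:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

/**
 * Illustrative only: reconciles one hypothetical "customers" table between
 * HQ and SL after an offline period, with a hard-coded "HQ wins" rule.
 */
public class HqWinsReconciler {

    public static void main(String[] args) throws SQLException {
        Timestamp offlineSince = Timestamp.valueOf("2024-01-01 00:00:00"); // when the link went down

        try (Connection hq = DriverManager.getConnection(
                     "jdbc:postgresql://hq-host/corp", "sync", "secret");
             Connection sl = DriverManager.getConnection(
                     "jdbc:postgresql://sl-host/corp", "sync", "secret")) {

            // Every row HQ touched while the link was down overwrites SL's copy.
            try (PreparedStatement changed = hq.prepareStatement(
                         "SELECT id, name, email FROM customers WHERE last_modified > ?");
                 PreparedStatement upsert = sl.prepareStatement(
                         "INSERT INTO customers (id, name, email) VALUES (?, ?, ?) " +
                         "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, " +
                         "email = EXCLUDED.email")) {
                changed.setTimestamp(1, offlineSince);
                try (ResultSet rs = changed.executeQuery()) {
                    while (rs.next()) {
                        upsert.setLong(1, rs.getLong("id"));
                        upsert.setString(2, rs.getString("name"));
                        upsert.setString(3, rs.getString("email"));
                        upsert.executeUpdate();
                    }
                }
            }
            // The reverse direction (SL -> HQ) must additionally skip any row
            // HQ also changed, which is exactly the policy decision Postgres
            // cannot make for you.
        }
    }
}
```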
EDIT: As Keven points out below, trying to do this is probably a recipe for disaster if you care at all about consistency.
RocksDB newbie here.
At runtime, I only use RocksDB to read data. Sometimes, I need to merge session-specific records from other sources.
I don't want them to be merged into the main database.
I want them to exist only during the session lifetime for that specific session.
I can, of course, use a regular std::vector or something and merge the RocksDB and the other sources, but that would duplicate the data.
I see a bunch of concepts like memtables and merge operators, which sound like they might be usable or exploitable here. For example, if I could tell the memtable to never flush to disk and just abandon the changes at the end of the session, that should work. Is that doable?
The easiest way is probably to separate them into different column families and just drop the one you don't want to persist when you shut down your application. If you need per-entry lifetimes, you will probably have to consider something custom, like a RAII holder class (if you are using C++) that inserts on construction and deletes on destruction. I would still go with a separate column family, to keep the data cleanly separated in case of a crash failure.
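Here is a hedged sketch of that column-family approach using the RocksJava bindings (the C++ API is analogous); the path and key names are made up:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class SessionScratchSpace {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();

        List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
                new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY, new ColumnFamilyOptions()),
                new ColumnFamilyDescriptor("session".getBytes(StandardCharsets.UTF_8),
                        new ColumnFamilyOptions()));
        List<ColumnFamilyHandle> handles = new ArrayList<>();

        DBOptions options = new DBOptions()
                .setCreateIfMissing(true)
                .setCreateMissingColumnFamilies(true);
        RocksDB db = RocksDB.open(options, "/tmp/mydb", descriptors, handles);
        try {
            ColumnFamilyHandle main = handles.get(0);     // long-lived data
            ColumnFamilyHandle session = handles.get(1);  // per-session scratch data

            // Session-specific records go into their own column family,
            // so they never mix with the main data set.
            db.put(session, "user:42:cart".getBytes(), "widget".getBytes());
            byte[] fromSession = db.get(session, "user:42:cart".getBytes());
            byte[] fromMain = db.get(main, "user:42:cart".getBytes());   // null: not in main CF

            // At shutdown, throw the whole session family away; the main
            // column family is untouched.
            db.dropColumnFamily(session);
        } finally {
            // Close CF handles before the database itself.
            for (ColumnFamilyHandle h : handles) {
                h.close();
            }
            db.close();
            options.close();
        }
    }
}
```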
From what I can make out, NoSQL databases might be a good option for read-intensive applications, but are a less good fit if you also need to do a lot of data updates and transactionality is very important to you (what with there being no ACID compliance). Right? Maybe that's too simplistic.
But anyway, supposing I'm at least partly right, I'm now concerned about how NoSQL databases maintain a "read consistent" view of the data that you're either reading or writing. Or do they? And if they don't, isn't that a really big problem?
I mean, if the data that you're reading (or updating) is changing as you read it, then you're potentially going to get an inconsistent/dirty result set. Coming from an Oracle RDBMS background, where all of this is just handled for you, I find it confusing that the lack of read consistency could be anything but a big problem. It could well be that I'm missing some key point about all this. Can someone set me straight?
I am a developer on the Oracle NoSQL Database and will answer your question relative to that particular NoSQL system.
The Oracle NoSQL Database API allows the programmer to specify -- with each API call -- the level of read consistency. The four possible values, ranging from strictest to loosest, are Absolute, Time, Version, and None. Absolute says to always read from the replication master, so that the most current value is returned. Time says that the system can return a value from any replica that is within a certain time delta of the master (e.g. read the value from any replica that is within 2 seconds of the master). Every read and write call to the system returns a "version handle". This version handle may be passed into any read call when Consistency.Version is specified, and it tells the system to read from any replica that is at least as up to date as that version. This is useful for read-modify-write (aka CAS) scenarios. The last value, Consistency.None, says that any replica can be used (i.e. there is no consistency guarantee).
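A rough sketch of how these options appear in the KVStore Java API (a minimal sketch only; double-check the class and method names against your release):

```java
import java.util.Arrays;
import java.util.concurrent.TimeUnit;

import oracle.kv.Consistency;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;
import oracle.kv.Version;

public class ConsistencyExamples {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "node01:5000"));   // store name/host are examples

        Key key = Key.createKey(Arrays.asList("users", "42"));
        Version written = store.put(key, Value.createValue("alice@example.com".getBytes()));

        // Absolute: always served by the replication master (most current value).
        ValueVersion absolute = store.get(key, Consistency.ABSOLUTE, 5, TimeUnit.SECONDS);

        // Time: any replica lagging the master by at most 2 seconds is acceptable.
        ValueVersion recent = store.get(key,
                new Consistency.Time(2, TimeUnit.SECONDS, 5, TimeUnit.SECONDS),
                5, TimeUnit.SECONDS);

        // Version: any replica at least as up to date as the version handle we
        // got back from the put (the read-modify-write / CAS pattern).
        ValueVersion sinceWrite = store.get(key,
                new Consistency.Version(written, 5, TimeUnit.SECONDS),
                5, TimeUnit.SECONDS);

        // None: any replica will do; no consistency guarantee.
        ValueVersion any = store.get(key, Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);

        store.close();
    }
}
```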
I hope this is helpful.
Charles Lamb
A NoSQL database can be read-consistent, although it's generally not a big problem if it's not strictly so; check out the CAP theorem. There's been quite a lot of research done in this area, and I recommend reading Amazon's Dynamo paper for a quick view of some of the problems and solutions faced by distributed systems like NoSQL databases.
MongoDB lets the application choose the level of consistency and durability it needs on a per-write basis using a "write concern". This concept allows your application to block until a certain condition is met for a given write.
By way of example, you can consider any write successful so long as the operation is communicated to a master server. Alternatively, you can block until a write has been propagated to a majority of nodes in your replica set. In this way, you can mix performance/consistency to taste.
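A minimal sketch with the MongoDB Java driver (the database and collection names are just examples):

```java
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class WriteConcernExamples {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // Fast: the write is acknowledged by the primary only.
            orders.withWriteConcern(WriteConcern.W1)
                  .insertOne(new Document("sku", "abc").append("qty", 1));

            // Stricter: block until a majority of replica-set members have the write.
            orders.withWriteConcern(WriteConcern.MAJORITY)
                  .insertOne(new Document("sku", "def").append("qty", 2));
        }
    }
}
```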
It depends on the NoSQL database you are using, as each implements a different strategy. You can read, for example, Riak's explanation of its "eventual consistency" model, or Lars Hofhansel's writeup on ACID in HBase.