I know mongo docs provide a way to simulate auto_increment.
http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
But it is not concurrency-proof as guaranteed by, say, MySQL.
Consider the sequence of events:
client 1 obtains an index of 1
client 2 obtains an index of 2
client 2 saves doc with id=2
client 1 saves doc with id=1
In this case, it is possible to save a doc with an id less than the current max that is already saved. For MySQL, this can never happen since the auto-increment id is assigned by the server.
How do I prevent this? One way is to do optimistic looping at each client, but with many clients this will result in heavy contention. Is there a better way?
The use case for this is to ensure the id is "forward-only". This is important for, say, a chat room where many messages are posted and the messages are paginated; I do not want new messages to be inserted into a previous page.
But it is not concurrency-proof as guaranteed by, say, MySQL.
That depends on the definition of concurrency-proof, but let's see.
In this case, it is possible to save a doc with an id less than the current max that is already saved.
That is correct, but it depends on the definition of simultaneity and monotonicity. Let's say your code snapshots the state of some other part of the system, then fetches the monotonic key, then performs an insert that may take a while. In that case, this apparently non-monotonic insert might actually be 'more monotonic' in the sense that index 2 was indeed captured at a later time, possibly reflecting a more recent state. In other words: does the time it took to insert really matter?
For MySQL, this can never happen since the auto-increment id is assigned by the server.
That sounds like folklore. Most relational dbs offer fine-grained control over these features, since strict guarantees severely impact concurrency.
MySQL neither guarantees that there are no gaps, nor that a transaction with a higher AUTO_INCREMENT id isn't visible to other readers before a transaction that acquired a lower AUTO_INCREMENT value has committed, unless you hold a table-level lock, which severely impacts concurrency.
For gaplessness, consider a transaction rollback of the first of two concurrent inserts. Does the second insert now get a new id assigned while it's being committed? No - from the InnoDB documentation:
You may see gaps in the sequence of values assigned to the AUTO_INCREMENT column if you roll back transactions that have generated numbers using the counter. (see end of 14.6.5.5.1, "Traditional InnoDB Auto-Increment Locking")
and
In all lock modes (0, 1, and 2), if a transaction that generated auto-increment values rolls back, those auto-increment values are “lost”
Also, you're completely ignoring the problem of replication, where sequences lead to even more trouble:
Thus, table-level locks held until the end of a statement make INSERT statements using auto-increment safe for use with statement-based replication. However, those locks limit concurrency and scalability when multiple transactions are executing insert statements at the same time. (see 14.6.5.5.2 "Configurable InnoDB Auto-Increment Locking")
The sheer length of the documentation of the InnoDB behavior is a reminder of the true complexity of making apparently simple guarantees in a concurrent system. Yes, monotonicity of inserts is possible with table-level locks, but hardly desirable. If you take a distributed view of the system, things get worse, because we can't even be sure of the counter value in partition mode...
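To make the rollback case above concrete, here is a minimal sketch of how a gap appears in InnoDB (the table, its columns, and the session split are my own illustration):

-- demo table, InnoDB
CREATE TABLE t (
    id  INT AUTO_INCREMENT PRIMARY KEY,
    val VARCHAR(20)
) ENGINE=InnoDB;

-- session 1
START TRANSACTION;
INSERT INTO t (val) VALUES ('a');   -- reserves id 1
ROLLBACK;                           -- id 1 is now "lost"

-- session 2
INSERT INTO t (val) VALUES ('b');   -- gets id 2, leaving a gap

SELECT id, val FROM t;              -- returns only (2, 'b')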
Related
I want to use a table in Postgres database as a storage for input documents (there will be billions of them).
Documents are continuously added (using "UPSERT" logic to avoid duplicates) and are rarely removed from the table.
There will be multiple worker apps that should continuously read data from this table, from the first inserted row to the latest, and then poll new rows as they are being inserted, reading each row exactly once.
Also, when a worker's processing algorithm changes, all the data should be reread from the first row. Each app should be able to maintain its own row-processing progress, independent of the other apps.
I'm looking for a way to track the last processed row, to be able to pause and continue polling at any moment.
I can think of these options:
Using an autoincrement field
And then store the autoincrement field value of the last processed row somewhere, to use it in the next query like this:
SELECT * FROM document WHERE id > :last_processed_id LIMIT 100;
But after some research I found that in a concurrent environment, it is possible that rows with lower autoincrement values will become visible to clients LATER than rows with higher values, so some rows could be skipped.
Using a timestamp field
The problem with this option is that timestamps are not unique and can overlap under a high insertion rate, which, once again, leads to skipped rows. Also, adjusting the system time (manually or by NTP) may lead to unpredictable results.
Add a process completion flag to each row
This is the only actually reliable way I could think of, but it has drawbacks: each row must be updated after it is processed, extra storage is needed for a completion-flag field per app, and adding a new app may require a DB schema change. This is a last resort for me; I'd like to avoid it if there is a more elegant way to do this.
I know the task definition screams that I should use Kafka for this, but the problem is that it doesn't allow deleting single messages from a topic, and I need that functionality. Keeping an external list of Kafka records that should be skipped during processing feels very clumsy and inefficient to me. Also, real-time deduplication with Kafka would require some external storage as well.
I'd like to know if there are other, more efficient approaches to this problem using the Postgres DB.
I ended up saving the transaction id with each row and then selecting records whose txid is lower than the smallest transaction id still in progress at that moment, like this:
SELECT * FROM document
WHERE ((txid = :last_processed_txid AND id > :last_processed_id) OR txid > :last_processed_txid)
AND txid < pg_snapshot_xmin(pg_current_snapshot())
ORDER BY txid, id
LIMIT 100
This way, even if Transaction #2, which started after Transaction #1, completes faster than the first one, the rows it wrote won't be read by a consumer until Transaction #1 finishes.
Postgres docs state that
xid8 values increase strictly monotonically and cannot be reused in the lifetime of a database cluster
so it should fit my case.
This solution is not that space-efficient, because an extra 8-byte txid field must be saved with each row, and an index for the txid field should be created, but the main benefits over other methods here are:
The DB schema remains the same when new consumers are added
No updates are needed to mark a row as processed; a consumer only has to keep the id and txid values of the last processed row
System clock drift or adjustment won't lead to rows being skipped
Having the txid for each row helps to query data in insertion order in cases where multiple producers insert rows with ids generated from preallocated pools (for example, Producer 1 currently inserts rows with ids in 1..100, Producer 2 with ids in 101..200, and so on)
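For reference, here is a minimal sketch of a table layout that supports this query; the body column is my own placeholder, and it assumes PostgreSQL 13+ for xid8, pg_current_xact_id() and pg_current_snapshot():

CREATE TABLE document (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body  jsonb  NOT NULL,
    txid  xid8   NOT NULL DEFAULT pg_current_xact_id()
);

-- the polling query filters and orders on (txid, id), so index them together
CREATE INDEX document_txid_id_idx ON document (txid, id);

Each insert then stamps the row with the id of its own transaction, which is what the xmin-based cutoff in the query relies on.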
Heroku did automatic failover to a follower DB.
I noticed a gap in some autoincremented primary key columns after that failover. E.g. before the failover, the latest record had ID 117019 and the next record got ID 117052. No records were deleted.
It's not an issue as such, I was just curious what's going on here, and if I may be correct in attributing it to the failover, or if I should look for other explanations.
These gaps are probably the result of failed transactions that were rolled back.
Sequences are not transactional, that is, the sequence won't return the same value again after a rollback.
This is intentional, see the documentation:
To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends up not using the value. [...] Such cases will leave unused “holes” in the sequence of assigned values. Thus, PostgreSQL sequence objects cannot be used to obtain “gapless” sequences.
PostgreSQL serial and identity columns are backed by ordinary sequences. So the sequence can hand out several values, and some of them simply end up unused.
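A minimal sketch of that behaviour, assuming a throwaway table:

CREATE TABLE events (id bigserial PRIMARY KEY, payload text);

BEGIN;
INSERT INTO events (payload) VALUES ('first');   -- nextval() hands out id 1
ROLLBACK;                                        -- the insert is undone, but 1 stays consumed

INSERT INTO events (payload) VALUES ('second');  -- gets id 2

SELECT id, payload FROM events;                  -- only (2, 'second'); id 1 is a gap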
How does one increment a field in a database such that even if a thousand connections to the database try to increment it at once, it will always be 1000 at the end (if started from zero)?
I mean, so that no two connections increment the same number, resulting in a lost increment.
How do you synchronize and make sure the data is consistent? Is a stored procedure required for this? Database "locking"? How is that done?
What you're looking for is a Postgres SEQUENCE.
You call nextval('sequence_name') to get the next number in the sequence.
According to the docs,
sequences are designed with concurrency in mind:
To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used, even if the transaction that did the nextval later aborts. This means that aborted transactions might leave unused "holes" in the sequence of assigned values.
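A minimal sketch of this (the names hit_counter and counters are just illustrative):

-- a dedicated sequence: every nextval() call is atomic, so a thousand
-- concurrent callers end up at exactly 1000
CREATE SEQUENCE hit_counter;
SELECT nextval('hit_counter');

-- alternatively, if the counter must live in an ordinary table row, an atomic
-- UPDATE does the job: concurrent increments queue on the row lock and none are lost
CREATE TABLE counters (name text PRIMARY KEY, value bigint NOT NULL DEFAULT 0);
INSERT INTO counters (name, value) VALUES ('hits', 0);
UPDATE counters SET value = value + 1 WHERE name = 'hits' RETURNING value;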
EDIT:
If you're looking for a gapless sequence, in 2006 someone posted a solution to the PostgreSQL mailing list: http://www.postgresql.org/message-id/44E376F6.7010802#seaworthysys.com. It appears there's also a lengthy discussion on locking, etc.
The gapless-sequence question was also asked on SO even though there was never an accepted answer:
postgresql generate sequence with no gap
I have a table whose primary keys are numbers that are not sequential.
Company policy is to register new rows with the lowest available ID value. E.g.:
table.ID = [11,13,14,16,17]
lowest available ID = 12
I have an algorithm that gives me the lowest available ID. I want to know how to prevent this ID from being used by someone else before I make the insertion.
Would it be possible to do this in the DB, or would it have to be in the programming language?
Thanks.
The company policy is extremely short-sighted. Unless the company's goal is to build applications that do not scale and the company is unconcerned with performance.
If you really wanted to do this, you'd need to serialize all your transactions that touch this table, essentially turning your nice, powerful server into a single-threaded, single-user, low-end machine. There are any number of ways to do this. The simplest (though not simple) method would be to do a SELECT ... FOR UPDATE on the row with the largest key less than the new key you want to insert (11 in this case). Once you acquire the lock, you would need to re-confirm that 12 is vacant. If it is, you could then insert the row with an id of 12. Otherwise, you'd need to restart the process, looking for a new key and trying to lock the row with an id one less than that key. When your transaction commits, the lock is released and the next session that was blocked waiting for it can proceed.
This assumes that you can control every process that tries to insert data into this table and that they all implement exactly the same logic. It will lock up the system if you ever allow transactions to span waits for human input, because humans will inevitably go to lunch with rows locked. And all that serialization will radically reduce the scalability of your application.
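A rough sketch of that flow (table and column names are made up, the retry loop is omitted, and the LIMIT syntax is PostgreSQL/MySQL-style):

BEGIN;

-- lock the row just below the candidate id so concurrent sessions hunting
-- for the same gap queue up behind this transaction
SELECT id
FROM the_table
WHERE id < 12
ORDER BY id DESC
LIMIT 1
FOR UPDATE;

-- re-confirm the candidate is still vacant before using it
SELECT count(*) FROM the_table WHERE id = 12;   -- must be 0

INSERT INTO the_table (id, name) VALUES (12, 'new row');

COMMIT;   -- releases the lock so the next blocked session can continue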
I would strongly encourage you to push back against the ridiculous "requirement" rather than implementing something this hideous.
I want to insert some data into an SQLite table, with one column for keeping string values and another column for keeping a sequence number.
The SQLite documentation says that autoincrement does not guarantee sequential insertion.
And I do not want to keep track of the previously inserted sequence number.
Is there any way to store data sequentially, without keeping track of the previously inserted row?
The short answer is that you're right: the autoincrement documentation makes it clear that INTEGER PRIMARY KEY AUTOINCREMENT will be constantly increasing, though, as you point out, not necessarily sequentially so. So you obviously have to either modify your code so it's not contingent on sequential values (which is probably the right course of action), or maintain your own sequential identifier yourself. I'm sure that's not the answer you're looking for, but I think it's the practical reality of the situation.
Short answer: Stop worrying about gaps in AUTOINCREMENT id sequences. They are inevitable when dealing with transactional databases.
Long answer:
SQLite cannot guarantee that AUTOINCREMENT will always increase by one, and the reason for this is transactions.
Say you have two database connections that start two parallel transactions almost at the same time. The first one acquires an AUTOINCREMENT id, which becomes the previously used value +1. One tick later, the second transaction acquires the next id, which is now +2. Now imagine that the first transaction rolls back for some reason (it hits an error, the code decides to abort it, the program crashes, etc.). After that, the second transaction commits id +2, creating a gap in the id numbering.
Now, what if the number of such parallel transactions is higher than two? You cannot predict the outcome, and you also cannot tell currently running transactions to reuse ids that went unused for whatever reason.
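A slightly different mechanism with the same end result can be seen from a single connection, assuming a throwaway AUTOINCREMENT table:

CREATE TABLE items (id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT);

BEGIN;
INSERT INTO items (val) VALUES ('a');   -- takes id 1
DELETE FROM items WHERE id = 1;         -- the row is gone, but id 1 stays "used"
COMMIT;

INSERT INTO items (val) VALUES ('b');   -- gets id 2, not 1

SELECT id, val FROM items;              -- only (2, 'b'); id 1 is a gap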
If you insert data sequentially into your SQLite database, it will be stored sequentially.
From the documentation: the automatically generated ROWIDs are guaranteed to be monotonically increasing.
So, for example, if you wanted to have a table for Person, you could use the following command to create the table with autoincrement.
CREATE TABLE PERSON (personID INTEGER PRIMARY KEY AUTOINCREMENT, personName TEXT)
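For example, a couple of inserts against that table (the values are just placeholders) get ever-increasing ids without the application tracking anything:

INSERT INTO PERSON (personName) VALUES ('Alice');
INSERT INTO PERSON (personName) VALUES ('Bob');

SELECT personID, personName FROM PERSON;   -- 1|Alice, 2|Bob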
Link: http://www.sqlite.org/autoinc.html