Is it possible to send a bulk gets call to memcached using spymemcached? (Note: gets, the variant of get that also returns the CAS ID.)
There is no way to send a bulk gets in spymemcached. In the future, though, all operations will most likely be sent as bulk operations under the hood. This means that you will send everything as a single operation and they will all get packaged up in spymemcached as a single bulk operation. This functionality might be available in version 3.0.
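In the meantime, a common workaround is to pipeline individual asyncGets calls (the CAS-returning variant) and collect the futures afterwards: the operations queue up on the client's connection without waiting on each round trip. A rough sketch, assuming spymemcached 2.x and a memcached instance on localhost:

```java
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import net.spy.memcached.CASValue;
import net.spy.memcached.MemcachedClient;
import net.spy.memcached.internal.OperationFuture;

public class BulkGetsWorkaround {
    public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        List<String> keys = Arrays.asList("key1", "key2", "key3");

        // Fire off one asyncGets per key; the operations are pipelined on
        // the client's connection rather than waiting for each reply.
        Map<String, OperationFuture<CASValue<Object>>> futures = new HashMap<>();
        for (String key : keys) {
            futures.put(key, client.asyncGets(key));
        }

        // Collect the results (and CAS IDs) once all operations are in flight.
        for (Map.Entry<String, OperationFuture<CASValue<Object>>> e : futures.entrySet()) {
            CASValue<Object> casValue = e.getValue().get(); // blocks until this reply arrives
            if (casValue != null) {
                System.out.printf("%s -> cas=%d value=%s%n",
                        e.getKey(), casValue.getCas(), casValue.getValue());
            }
        }
        client.shutdown();
    }
}
```

Note that spymemcached's getBulk does fetch multiple keys at once, but it does not return CAS IDs, which is why the per-key asyncGets is needed here.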
I found a discussion about this topic here: https://jira.mongodb.org/browse/RUST-271
The latest answer:
Hello! We don't implement the bulk write API and have no plans to add it. From dealing with it in other drivers, we've found that the semantics tend to be non-obvious, especially with regards to error handling, and it doesn't provide a whole lot of technical benefit given that it still sends different write types in separate commands to the server (i.e. the same way as if each of insert_many, update_many, delete_many, etc. were called separately).
But something is still not clear to me.
Why does that answer mention *_many operations (update_many, insert_many, etc.) if a bulk write is a set of *_one operations (update_one, insert_one, etc.)?
OK, insert_many uses a bulk write under the hood. But bulk write and update_many are different: the former is a set of update_one operations, each with its own filter and update body, while update_many applies a single update under a single filter to many documents.
So update_many cannot replace bulk write. Am I right? If so, why does the MongoDB Rust driver not support bulk write? If I am not right, can you please explain why? Maybe I am missing some details.
It also seems that if I call update_one many times I get bad performance compared to a bulk write, because a bulk write needs only one round trip to MongoDB, whereas repeated update_one calls need one round trip each.
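For what it's worth, here is what that "set of *_one operations" semantics looks like in the official MongoDB Java driver, which does expose a bulkWrite API. Each model carries its own filter and update body, which updateMany cannot express; the collection and field names below are made up for illustration:

```java
import com.mongodb.bulk.BulkWriteResult;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOneModel;
import com.mongodb.client.model.Updates;
import org.bson.Document;

import java.util.Arrays;

public class BulkWriteVsUpdateMany {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // bulkWrite: a set of update_one operations, each with its OWN
            // filter and update body, sent to the server as one batch.
            BulkWriteResult result = orders.bulkWrite(Arrays.asList(
                    new UpdateOneModel<>(Filters.eq("_id", 1), Updates.set("status", "shipped")),
                    new UpdateOneModel<>(Filters.eq("_id", 2), Updates.set("status", "cancelled")),
                    new UpdateOneModel<>(Filters.eq("_id", 3), Updates.inc("retries", 1))));
            System.out.println("modified: " + result.getModifiedCount());

            // updateMany: ONE filter and one update body, applied to every
            // matching document; it cannot express the mixed batch above.
            orders.updateMany(Filters.eq("status", "pending"), Updates.set("status", "shipped"));
        }
    }
}
```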
I'm wondering if transactions (https://firebase.google.com/docs/firestore/manage-data/transactions) are a viable tool to use in something like a ticketing system, where users may be attempting to read/write the same collection/document, and whoever made the request first is handled first, whoever is second is handled second, and so on.
If not, what would be a good structure for such a need with Firestore?
Transactions just guarantee an atomic, consistent update across the documents involved in the transaction. They don't guarantee the order in which those transactions complete, as the transaction handler might get retried in the face of contention.
Since you tagged this question with google-cloud-functions (but didn't mention it in your question), it sounds like you might be considering writing a database trigger to handle incoming writes. Cloud Functions triggers also do not guarantee any ordering when under load.
Ordering of any kind at the scale on which Firestore and other Google Cloud products operate is a really difficult problem to solve (please read that link to get a sense of that). There is no simple database structure that will impose an order on changes as they are made. I suggest you think carefully about your need for ordering and come up with a different solution.
The best indication of order you can get is probably to add a server timestamp to each document, but you will still have to figure out how to process them. The easiest approach might be to have a backend periodically query the collection, ordered by that timestamp, and process the documents in that order, in batch.
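A minimal sketch of that idea with the Firestore Java client library; the ticketRequests collection and the createdAt, userId, and ticketId fields are hypothetical names used for illustration:

```java
import com.google.cloud.firestore.FieldValue;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;
import com.google.cloud.firestore.QueryDocumentSnapshot;

import java.util.HashMap;
import java.util.Map;

public class TicketQueueSketch {
    public static void main(String[] args) throws Exception {
        Firestore db = FirestoreOptions.getDefaultInstance().getService();

        // 1. Each incoming request records a server-side timestamp, so the
        //    order is assigned by Firestore, not by the client's clock.
        Map<String, Object> request = new HashMap<>();
        request.put("userId", "user-123");
        request.put("ticketId", "ticket-42");
        request.put("createdAt", FieldValue.serverTimestamp());
        db.collection("ticketRequests").add(request).get();

        // 2. A backend job periodically drains the queue in timestamp order
        //    and processes the requests one by one.
        for (QueryDocumentSnapshot doc : db.collection("ticketRequests")
                .orderBy("createdAt")
                .limit(100)
                .get().get() // ApiFuture<QuerySnapshot> -> QuerySnapshot
                .getDocuments()) {
            // Process the request, then delete it or mark it handled,
            // e.g. doc.getReference().delete();
        }
    }
}
```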
My application is on Meteor 1.6.0.1, and I am using reywood:publish-composite and matb33:collection-hooks for DB relations.
I need to insert a list of 400 people into a collection from an Excel file. Currently I am inserting from the client by calling a Meteor method inside a loop, but when I watch Galaxy during this, the CPU usage is very high: 70-80%, sometimes 100%.
Once all the data is inserted, I need to send a mail and update each record, so I am sending the mail and updating via one Meteor method call at a time, which again pushes the CPU to 70-80%.
How can I do the above tasks in a correct and efficient way? Please help.
Thanks.
I suspect that you are not using oplog tailing, and that you are trying to insert while some other part of your app has subscriptions to publications open. Without oplog tailing, Meteor polls the collections and generates lots of slow queries on each document insert.
You can enable it by passing the oplog URL to Meteor at startup via the MONGO_OPLOG_URL environment variable. See https://docs.meteor.com/environment-variables.html#MONGO-OPLOG-URL for more info.
Oplog tailing eases the strain on the server and should reduce the high CPU usage to a manageable level.
If you are still having issues, then you may have to set up some tracing, e.g. Monti APM: https://docs.montiapm.com/introduction
I would like to implement a click counter using MongoDB (e.g. user clicks a link, count the total clicks).
My intuitive approach is to have an in-memory, low-priority thread pool that handles a blocking queue of click messages and persists them to MongoDB asynchronously in the background.
So my question is - does MongoDB's native Java Driver have some async capabilities that do just that?
If it doesn't, is there an alternative Mongo driver that might have benefits over rolling my own async code?
Well, not really async, but if you use a WriteConcern of NONE it is sort of async in that you only get the data into the socket's buffer before the insert returns. The downside is that you won't know whether the insert worked or not. In the face of a failure you could silently drop a lot of clicks.
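For illustration, in the current official Java driver the old NONE concern is spelled UNACKNOWLEDGED. A sketch of a fire-and-forget click insert, with made-up database and collection names:

```java
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Date;

public class FireAndForgetClick {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // UNACKNOWLEDGED (the old NONE): insertOne returns as soon as the
            // message is written to the socket buffer. Failed inserts are
            // silently dropped, which may be acceptable for lossy click counting.
            MongoCollection<Document> clicks = client
                    .getDatabase("analytics")
                    .getCollection("clicks")
                    .withWriteConcern(WriteConcern.UNACKNOWLEDGED);

            clicks.insertOne(new Document("link", "https://example.com/some-link")
                    .append("clickedAt", new Date()));
        }
    }
}
```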
There is an Asynchronous Java Driver that allows you to use futures or callbacks to get the results of the insert. Going that route, there should be no need to roll your own queue or have a background thread (the driver has its own receive threads).
HTH - Rob.
P.S. Full disclosure - I work on the Asynchronous Java Driver.
Since MongoDB does not support transactions, is there any way to get transactional guarantees?
What do you mean by "guarantee transaction"?
There are two concepts in MongoDB that come close:
Atomic operations
Using safe mode / getlasterror ...
http://www.mongodb.org/display/DOCS/Last+Error+Commands
If you simply need to know whether there was an error when you run an update, for example, you can use the getlasterror command. From the docs:
getlasterror is primarily useful for write operations (although it is set after a command or query too). Write operations by default do not have a return code: this saves the client from waiting for client/server turnarounds during write operations. One can always call getLastError if one wants a return code.
If you're writing data to MongoDB on multiple connections, then it can sometimes be important to call getlasterror on one connection to be certain that the data has been committed to the database. For instance, if you're writing to connection #1 and want those writes to be reflected in reads from connection #2, you can assure this by calling getlasterror after writing to connection #1.
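In modern drivers this maps to a write concern rather than a literal getlasterror call. For example, with the current MongoDB Java driver an acknowledged write concern makes each write wait for the server's response, which is the moral equivalent of calling getlasterror after the write. A sketch, with illustrative names:

```java
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class AcknowledgedWrite {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // ACKNOWLEDGED waits for the server's response to each write,
            // so server-side errors surface as exceptions instead of being
            // silently dropped.
            MongoCollection<Document> coll = client
                    .getDatabase("test")
                    .getCollection("things")
                    .withWriteConcern(WriteConcern.ACKNOWLEDGED);

            coll.insertOne(new Document("name", "example"));
        }
    }
}
```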
Alternatively, you can use atomic operations for cases where you need to increment a value, for example (like an upvote, etc.); more about that here:
http://www.mongodb.org/display/DOCS/Atomic+Operations
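For example, an upvote counter using $inc with the current MongoDB Java driver; the server applies the increment atomically, so concurrent upvotes cannot lose updates (names are illustrative):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class AtomicUpvote {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> posts =
                    client.getDatabase("forum").getCollection("posts");

            // $inc is applied atomically on the server: two clients
            // upvoting the same post at once both take effect.
            posts.updateOne(Filters.eq("_id", "post-1"), Updates.inc("votes", 1));
        }
    }
}
```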
As a side note, MySQL's default storage engine (MyISAM) doesn't have transactions either! :)
http://dev.mysql.com/doc/refman/5.1/en/myisam-storage-engine.html
MongoDB only supports atomic operations. There is no way to implement transactions in the ACID sense on top of MongoDB; such transaction support would have to be implemented in the core. But you will never see full transaction support due to the CAP theorem: you cannot have speed, durability, and consistency at the same time.
I think it's one of the things you choose to forego when you choose a NoSQL solution.
If transactions are required, perhaps NoSQL is not for you. Time to go back to ACID relational databases.
Unfortunately, MongoDB doesn't support transactions out of the box, but you can actually implement ACID optimistic transactions on top of it. I wrote an example and some explanation on a GitHub page.
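The building block behind such optimistic transactions is a compare-and-swap on a version field: read the document, compute the new state, and write it back only if the version is unchanged, retrying otherwise. A minimal single-document sketch with the Java driver (this shows the underlying pattern, not the code from the linked GitHub page; the account document shape is made up):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;

public class OptimisticUpdate {

    /** Debit an account, but only if nobody modified it since we read it. */
    static boolean tryDebit(MongoCollection<Document> accounts, String id, int amount) {
        Document current = accounts.find(Filters.eq("_id", id)).first();
        if (current == null || current.getInteger("balance", 0) < amount) {
            return false; // unknown account or insufficient funds
        }
        long version = ((Number) current.get("version")).longValue();

        // The filter matches the version we read; if a concurrent writer
        // bumped it in the meantime, matchedCount is 0 and we must retry.
        UpdateResult result = accounts.updateOne(
                Filters.and(Filters.eq("_id", id), Filters.eq("version", version)),
                Updates.combine(Updates.inc("balance", -amount), Updates.inc("version", 1)));
        return result.getMatchedCount() == 1;
    }

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> accounts =
                    client.getDatabase("bank").getCollection("accounts");
            // Assumes a seeded document like { _id: "alice", balance: 100, version: 0 }.
            boolean ok = tryDebit(accounts, "alice", 25);
            System.out.println(ok ? "debited" : "conflict or insufficient funds, retry or abort");
        }
    }
}
```

Coordinating several documents this way requires more machinery (e.g. a two-phase-commit-style protocol), which is what a full implementation like the one linked above adds on top of this compare-and-swap primitive.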