After watching this video (https://www.youtube.com/watch?v=poqTHxtDXwU&feature=emb_title), I saw a comment like this:
"So there is a read charge even clients app cached the same document data" (currently 24 thumbs up)
And in a reply to another comment, Todd Kerpelman wrote:
"Great question! The answer is that yes, you really will fetch those first 20 documents all the time. Note that this is different than when you have a realtime listener set up and a document changes in a query you're currently listening to - in that case, only the changed doc will be sent up. But if you're making a series of separate get calls that just happen to overlap, the database will send up the entire data set each time."
I am confused now. My question is: when you load the next page with startAfter, are the documents that were already loaded fetched again? Will I be charged for them?
when you load the next page with startAfter, are the documents that were already loaded fetched again?
No. For pagination, each query is completely independent of the last. It does not re-fetch all the prior documents, and you will not be charged for them. The query uses the document passed to startAfter() to determine exactly where it should start reading results, and you are charged only for the documents that the query actually returns.
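To make the billing behavior concrete, here is a minimal sketch using the Firebase v9 web SDK; the posts collection and createdAt field are hypothetical stand-ins for your own schema:

```ts
import { initializeApp } from "firebase/app";
import {
  getFirestore, collection, query, orderBy,
  limit, startAfter, getDocs,
} from "firebase/firestore";

// Hypothetical "posts" collection ordered by "createdAt".
const db = getFirestore(initializeApp({ /* your firebase config */ }));
const posts = collection(db, "posts");

async function demoPagination() {
  // Page 1: billed for up to 20 reads.
  const firstPage = await getDocs(query(posts, orderBy("createdAt"), limit(20)));
  const cursor = firstPage.docs[firstPage.docs.length - 1];

  // Page 2: startAfter(cursor) tells the server where to begin, so the
  // first 20 documents are skipped server-side and are NOT billed again.
  const secondPage = await getDocs(
    query(posts, orderBy("createdAt"), startAfter(cursor), limit(20))
  );
  console.log(firstPage.size, secondPage.size);
}
```

Reading two pages this way costs about 40 reads in total, not 20 for the first page plus another 40 for the second.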
In my web app, an authenticated user can pick songs from their Spotify playlist to play at a party. I want guests (unauthenticated users) to be able to view the picked songs on a dynamically created React route and vote on their favorite songs from their own devices (probably phones).
I am using a MongoDB, Express, React/Redux, Node stack.
Since the guests don't have access to my app's Redux store, the only way they can view the authenticated user's picked songs is through a GET request to my app's database. My initial plan was to store just playlist documents, and have users GET those playlists and then request the songs from the Spotify API. However, guests are unauthorized and would need an access token. This means that my database has to store every single one of the songs that the authenticated user picked.
My question has to do with design. I don't think it's a good idea for one document to hold every song, because some people might want to pick thousands of songs, and one document won't be able to hold them all. On the other hand, creating a separate document for each song seems a little excessive.
Can anyone help me figure out which option is better, or whether there is a different option I haven't thought of that avoids this problem altogether? Thank you.
If you store each song in a separate document, the main disadvantage of this strategy is space: you'll need more storage to hold all the documents.
But keeping all the song documents in the same collection gives you some advantages: query and sort operations become more flexible and faster, which saves both processing and development time. A similar logic is shown here.
Using just one document to store all the songs makes your database operations more complex, which requires more development time and code to organize the retrieved data properly. Another disadvantage is that it isn't a scalable long-term strategy, mainly because a BSON document is limited to 16MB.
In my view, the design with a separate document for each song is more appropriate, and the reasons are:
Space is monetarily cheap.
Reducing time complexity should be a priority throughout software development, and database queries are usually the slowest operations in a piece of software, so cutting the time cost of database operations is a worthwhile objective. Storing the songs as separate documents in one collection means queries return the data already filtered and sorted, with no need to reorganize it in code (see the sketch below).
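A minimal sketch of the one-document-per-song design with the mongodb Node driver; the database, collection, and field names here are hypothetical:

```ts
import { MongoClient, ObjectId } from "mongodb";

// Hypothetical shape: one document per picked song, keyed to its party.
interface Song {
  _id?: ObjectId;
  partyId: string;        // which party/playlist the song belongs to
  spotifyTrackId: string;
  title: string;
  votes: number;
}

async function topSongs(partyId: string) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const songs = client.db("partyApp").collection<Song>("songs");

  // A guest vote is a single atomic increment; no document rewriting.
  await songs.updateOne(
    { partyId, spotifyTrackId: "track123" },
    { $inc: { votes: 1 } }
  );

  // The leaderboard comes back already sorted by the database.
  const leaderboard = await songs
    .find({ partyId })
    .sort({ votes: -1 })
    .limit(50)
    .toArray();

  await client.close();
  return leaderboard;
}
```

Note how both the vote and the leaderboard become one-liners here, whereas with a single giant playlist document you would have to pull the whole array down and sort it in application code.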
I am working on an art project which displays Bitcoin transactions graphically.
Thus I need some way of getting an update when a transaction is filed in the blockchain.
Is there any way to accomplish this without copying the whole blockchain, since I do not need any precise information about the transactions?
If you really want to get all the transactions that are happening, then you have to parse each new block as it comes in. Look into the RPC calls to fetch each individual block.
If you just want to watch certain addresses for transactions, you can look into the walletnotify option of the bitcoind node.
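For the first approach, a minimal polling sketch against a local bitcoind's JSON-RPC interface might look like this; the URL, credentials, and poll interval are assumptions to adapt to your node (requires Node 18+ for the global fetch):

```ts
// Poll a local bitcoind for new blocks over JSON-RPC.
const RPC_URL = "http://localhost:8332";
const AUTH = "Basic " + Buffer.from("rpcuser:rpcpassword").toString("base64");

async function rpc(method: string, params: unknown[] = []) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify({ jsonrpc: "1.0", id: "art", method, params }),
  });
  return (await res.json()).result;
}

async function watchBlocks(onTx: (txid: string) => void) {
  let height = await rpc("getblockcount");
  setInterval(async () => {
    const tip = await rpc("getblockcount");
    while (height < tip) {
      height++;
      const hash = await rpc("getblockhash", [height]);
      const block = await rpc("getblock", [hash]); // default verbosity: tx ids only
      for (const txid of block.tx) onTx(txid);     // enough for a visualization
    }
  }, 10_000);
}

watchBlocks((txid) => console.log("new tx:", txid));
```

Since you only need the existence of transactions, the default getblock verbosity (transaction ids only) is enough, and you never have to store or index the chain yourself.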
I'm using the Core Reporting API (Reporting API V4).
Is there any way for me to determine the last time the data my query is returning was updated?
I'd like to be able to indicate whether the data being displayed was last updated several hours ago versus several minutes ago.
The API does respond with isDataGolden, which tells you whether the data may still change; if your website is small, the data-processing latency could be almost nothing.
From your question it sounds like you're interested not just in whether the data is stale but in how stale it is. You could request the ga:hour and ga:minute dimensions to find out when the last processed hit was recorded.
Note that there is also the Realtime API, which gives you a read of what is happening instantaneously.
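Here is a hedged sketch of the ga:hour/ga:minute idea using the googleapis Node client; the view ID and the service-account.json key file are placeholders for your own credentials:

```ts
import { google } from "googleapis";

async function lastProcessedHit(viewId: string) {
  const auth = new google.auth.GoogleAuth({
    keyFile: "service-account.json",
    scopes: ["https://www.googleapis.com/auth/analytics.readonly"],
  });
  const reporting = google.analyticsreporting({ version: "v4", auth });

  const res = await reporting.reports.batchGet({
    requestBody: {
      reportRequests: [{
        viewId,
        dateRanges: [{ startDate: "today", endDate: "today" }],
        metrics: [{ expression: "ga:hits" }],
        dimensions: [{ name: "ga:hour" }, { name: "ga:minute" }],
        orderBys: [
          { fieldName: "ga:hour", sortOrder: "DESCENDING" },
          { fieldName: "ga:minute", sortOrder: "DESCENDING" },
        ],
        pageSize: 1, // only the most recently processed hour/minute
      }],
    },
  });

  const report = res.data.reports?.[0];
  const row = report?.data?.rows?.[0];
  // isDataGolden tells you whether this data may still change.
  return {
    hour: row?.dimensions?.[0],
    minute: row?.dimensions?.[1],
    isDataGolden: report?.data?.isDataGolden,
  };
}
```

Comparing that hour/minute pair against the current time gives you the "several hours ago versus several minutes ago" indicator you're after.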
I'm using Cloudant and I'm struggling to pull/replicate 600 documents from the server to my iPhone. First, it's pretty slow because it has to go one document at a time; second, Cloudant was giving me timeouts after the 100th-or-so REST request. (I have a ticket open with Cloudant for this one, as it's unacceptable!)
I was wondering if anyone has found a way or hack to "bulk" replicate when pulling. I was thinking it might be possible to zip up all of the changes, send them in one file, and fast-forward the iPhone database to the last-change seq.
Any help is great -- thanks!
Can you not hit _all_docs?include_docs=true to get everything in one shot? See http://wiki.apache.org/couchdb/HTTP_Document_API#all_docs
I don't know CouchCocoa, but it looks like its API supports this: http://couchbaselabs.github.com/CouchCocoa/docs/interfaceCouchDatabase.html#a49d0904f438587b988860891e8049885
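In plain HTTP terms, the one-shot fetch could look like this; the account, database name, and credentials are placeholders:

```ts
// Substitute your own Cloudant account, database, and credentials.
const BASE = "https://myaccount.cloudant.com/mydb";

async function fetchAllDocs() {
  const res = await fetch(`${BASE}/_all_docs?include_docs=true`, {
    headers: { Authorization: "Basic " + btoa("user:password") },
  });
  const body = await res.json();
  // Each row carries the full document under `doc`.
  return body.rows.map((row: { doc: unknown }) => row.doc);
}
```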
Actually, why not make a view? Make a view that gives you your list and make sure your id is in it. With the id, you can then go to the document and get the rest of the information you need in order to update it (see the sketch after this answer).
There really is no reason you would ever need to hit every document individually; views and search exist for exactly that. Keep in mind you are using a cloud-based technology. This stuff is not sitting in your basement; you can't hit it a million times per device in a few seconds and expect no one to notice or get upset (an exaggeration, yes, I know).
What I do not understand is why you are trying to replicate it to an iPhone. Are you running Apache CouchDB in your app? Why not just read the JSON data and put it into a local database, or just write it to a file if it updates that much and keep overwriting it? There are so many options that are a whole lot less messy.
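A hedged sketch of the view idea over plain HTTP; the design-document name, view name, and emitted fields are assumptions:

```ts
// Assumed database URL and credentials; adapt to your Cloudant account.
const DB = "https://myaccount.cloudant.com/mydb";
const headers = {
  "Content-Type": "application/json",
  Authorization: "Basic " + btoa("user:password"),
};

// Create a design document whose map function emits each doc's id and title.
async function createListView() {
  await fetch(`${DB}/_design/app`, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      views: {
        // CouchDB map functions are stored as JavaScript source strings.
        list: { map: "function (doc) { emit(doc._id, doc.title); }" },
      },
    }),
  });
}

// Query the view; include_docs=true returns the full documents with each row.
async function queryListView() {
  const res = await fetch(`${DB}/_design/app/_view/list?include_docs=true`, {
    headers,
  });
  return (await res.json()).rows;
}
```

One view query replaces hundreds of per-document GETs, which is exactly what was triggering the timeouts.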
I am in the middle of developing an app which harvests tweets, Facebook statuses, and Facebook photos for a user. Currently the user sets out exactly the from/to period over which they want this harvest to occur, and a spider pulls the data during that period. The from/to is stored in a MySQL db, and my plan was to store all the tweets, statuses, and photo metadata in MongoDB (with the actual images on S3).
I was thinking I would just create one collection for each of the periods the user wants to harvest, and then store all the tweets etc. from that period in that particular collection.
Does this seem like a reasonable approach?
Does this seem like a reasonable approach?
What is the #1 user query? Is it "find activity by period"? If users only ever want to "find by period", then this makes sense.
However, if users want an accumulated view, you now have to gather the history for a user and merge it for display.
If you want both a "by this period" view and an "accumulated" view, then I suggest simply stuffing all the data into a single user object. It's easy to tag the individual actions with a "harvest run" and a "timestamp" (see the sketch below).
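A minimal sketch of that single-user-object shape with the mongodb Node driver; the database, collection, and field names are hypothetical:

```ts
import { MongoClient } from "mongodb";

// Every harvested action is tagged with its harvest run and timestamp,
// so both the "by period" and accumulated views come from one document.
interface HarvestedAction {
  kind: "tweet" | "status" | "photo";
  harvestRun: string;   // which harvest period produced this action
  timestamp: Date;
  payload: Record<string, unknown>;
}

interface UserDoc {
  _id: string;          // the app's user id
  actions: HarvestedAction[];
}

async function example(userId: string, run: string, batch: HarvestedAction[]) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const users = client.db("harvester").collection<UserDoc>("users");

  // Accumulated view: everything is appended to the one user document.
  await users.updateOne(
    { _id: userId },
    { $push: { actions: { $each: batch } } },
    { upsert: true }
  );

  // "By this period" view: filter the embedded array server-side.
  const [doc] = await users.aggregate([
    { $match: { _id: userId } },
    { $project: {
        actions: { $filter: {
          input: "$actions", as: "a",
          cond: { $eq: ["$$a.harvestRun", run] },
        } },
    } },
  ]).toArray();

  await client.close();
  return doc?.actions ?? [];
}
```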
Mongo details: MongoDB can handle individual documents up to about 4MB, and most recent versions raise this to 8 or 16MB. If you're only using this space for text, realize that this is a lot of text: a copy of War & Peace is just over 3MB, so you're talking about hundreds of pages of text in 4MB. With 8 or 16MB, you can probably store years of status updates and tweets for most people.
Note that MongoDB has GridFS for storing binary data (like image files), so you'll typically store just pointers to these in the User document.