I want to retrieve data from memcache without affecting the LRU. Is there a way to fetch items without updating their LRU "last accessed" timestamp?
There isn't a specific command that will let you do this. One option would be to simply remove the key from the cache after you get it, if you know you won't need it again for a long time.
When I used TTL indexes to expire data in MongoDB, I wondered whether there is a way to modify the default behavior of TTL, which currently removes the expired documents.
For example, I have documents with the fields isUsed and expiredAt; when a document expires, I want to reset isUsed to false instead of removing it. Is there any way to do that?
Thanks
No, you cannot change this. TTL stands for "time to live", not "time to modify" or "time to trigger a custom function".
Stands for “time to live” and represents an expiration time or period for a given piece of information to remain in a cache or other temporary storage before the system deletes it or ages it out.
So if you really need behavior like this, you have to write your own custom logic (for example, a scheduled job that finds documents past their expiredAt and resets isUsed to false, instead of relying on a TTL index).
I have a collection with a bunch of documents representing various items. Once in a while, I need to update item properties, but the update takes some time. When properties are updated, the item gets a new timestamp for when it was modified. If I run updates one at a time, then there is no problem. However, if I want to run multiple update processes simultaneously, it's possible that one process starts updating the item, but the next process still sees the item as needing an update and starts updating it as well.
One solution is to mark the item as soon as it is retrieved for update (findAndModify), but it seems wasteful to add a whole extra field to every document just to keep track of items currently being updated.
This should be a very common issue. Maybe there are some built-in functions that exist to address it? If not, is there a standard established method to deal with it?
I apologize if this has been addressed before, but I am having a hard time finding this information. I may just be using the wrong terms.
You could use db.currentOp() to check if an update is already in flight.
First of all I want to show how I made this in SQL:
Neither the location table nor the environment table will ever contain more than those four rows. Each log can only be associated with 4 rows.
What I don't understand is how to even start writing code that takes whatever the user has chosen (based on state, switches, etc. in my UI) and persists it.
Because when the user is done, I want to store a "log record", and the log record may have location and environment rows associated with it. And what happens when the user, let's say, chooses all the location rows four times in a row... does it add the location to the Location entity every time? Would I end up with a lot of duplicated data? I would appreciate any help that can show me how to do this. Thank you!
Looks like you need three entities. You'll have Location and Environment entities that have whichever attributes they need, and a Log entity that has relationships with both Environment and Location.
I think you're asking whether instances of Location and Environment that happen to be the same will be duplicated in the Core Data store, or whether multiple Log instances will relate to the same Location and Environment instances. Is that right? Answer: it's up to you. Say you want to save a Location instance that has a particular set of attributes. You could first search for one that has that exact set of attributes and associate it with your Log instance, or you could just create a new Location instance and not worry about the duplication. If you're storing zillions of these Log entries, the first plan might save a lot of space. If you're not saving them all that often, and particularly if the user can go back and change the data associated with a Log instance, you might want to use separate instances even if they happen to be the same.
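If you decide to avoid duplicates, a fetch-or-create lookup before inserting is the usual pattern. Here is a minimal Objective-C sketch, assuming a Location entity with a single name attribute (the attribute and the helper itself are illustrative, not from the question):

// Reuse an existing Location with this name if one exists; otherwise insert a new one.
- (NSManagedObject *)locationNamed:(NSString *)name inContext:(NSManagedObjectContext *)context
{
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Location"];
    request.predicate = [NSPredicate predicateWithFormat:@"name == %@", name];
    request.fetchLimit = 1;

    NSError *error = nil;
    NSArray *matches = [context executeFetchRequest:request error:&error];
    if (matches.count > 0) {
        return matches[0]; // reuse: the Log's relationship points at the existing instance
    }

    NSManagedObject *location = [NSEntityDescription insertNewObjectForEntityForName:@"Location"
                                                              inManagedObjectContext:context];
    [location setValue:name forKey:@"name"];
    return location;
}

You would then set the returned object on the Log's location relationship. The same idea applies to Environment.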
What I'm trying to accomplish is the following: I need to limit the number of Core Data entries to 50. If the user never deletes entries and there are already 50 in Core Data, then when the user tries to add a new entry, the app should delete the oldest entry and add the new one to the top of the stack. Basically, I'm trying to have a history sort of thing, but I don't want the user to be able to go past 50 entries; I still want them to be able to add new entries when they're at the 50 limit by just dropping the oldest one and adding the newest one. What would be the easiest way to do this? I'm new to Core Data and having a hard time understanding a lot of it. Here's the code / example app that I'm working with. LINK TO EXAMPLE APP THAT I'M USING Thanks for the help.
Let's say you have an entity called History. The easiest solution would be to add a creationDate attribute to your entities. Then use that to manage your History objects.
You will need three fetches (see the sketch after this list):
1. Fetch all the existing History objects as faults and count them. If the count is < 50, just add the new History object and you're done.
2. If the count is >= 50, do a fetch for a specific value and use the @min collection operator (for dates, @min returns the earliest) to find the oldest creationDate. (As luck would have it, the example at the link is pretty much exactly what you need.)
3. Perform a fetch for the object with the creationDate returned by (2) and delete it.
4. Then add the new History object.
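A rough Objective-C sketch of those steps, assuming a History entity with a creationDate attribute (names are illustrative; this version uses countForFetchRequest: and a sort descriptor with a fetch limit of 1 instead of the @min collection operator, which is just another way to get the oldest entry):

// Keep at most 50 History objects: delete the oldest before inserting a new one.
- (void)addHistoryEntryInContext:(NSManagedObjectContext *)context
{
    NSError *error = nil;
    NSFetchRequest *countRequest = [NSFetchRequest fetchRequestWithEntityName:@"History"];
    NSUInteger count = [context countForFetchRequest:countRequest error:&error];

    if (count >= 50) {
        // Oldest entry = smallest creationDate, so sort ascending and fetch one object.
        NSFetchRequest *oldestRequest = [NSFetchRequest fetchRequestWithEntityName:@"History"];
        oldestRequest.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:YES]];
        oldestRequest.fetchLimit = 1;
        NSManagedObject *oldest = [[context executeFetchRequest:oldestRequest error:&error] lastObject];
        if (oldest) {
            [context deleteObject:oldest];
        }
    }

    // Add the new entry, stamped with the current date.
    NSManagedObject *entry = [NSEntityDescription insertNewObjectForEntityForName:@"History"
                                                           inManagedObjectContext:context];
    [entry setValue:[NSDate date] forKey:@"creationDate"];
    [context save:&error];
}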
OK, that's fine. CoreData is not going to do this for you, but you can do it yourself.
You can retrieve objects from your context using an NSFetchRequest, and you can delete them using -[NSManagedObjectContext deleteObject:]. You can sort them using NSSortDescriptor objects.
Hi all,
I currently have an NSFetchedResultsController set up to return all rows from a table in my Core Data database. This then fills my UITableView. The trouble is that this will quickly get out of hand as the number of rows grows.
How can I limit the initial query to say 20 results, then add a button somewhere to "Get More" from where we left off?
Thanks for any guidance as always
This is controlled with NSFetchRequest's -setFetchLimit: and -setFetchOffset:.
If I recall correctly, the drawback with NSFetchedResultsController is that you can't modify the fetch request after you create your NSFetchedResultsController instance. I believe this means you'll have to create a new one (instance w/new fetch request) each time you change the range you want to retrieve/display.
File an enhancement request with Apple at bugreporter.apple.com if you feel this shouldn't be the case.
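As a rough sketch of the paging idea with a plain NSFetchRequest (the Entry entity and creationDate attribute are illustrative), each tap of "Get More" would fetch the next page and append it to your data source:

// Fetch one page of 20 rows starting at the given offset.
- (NSArray *)fetchPageAtOffset:(NSUInteger)offset inContext:(NSManagedObjectContext *)context
{
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Entry"];
    // A stable sort order is needed for offsets to page through consistent results.
    request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:NO]];
    request.fetchLimit = 20;      // -setFetchLimit:
    request.fetchOffset = offset; // -setFetchOffset:

    NSError *error = nil;
    return [context executeFetchRequest:request error:&error];
}

If you stay with NSFetchedResultsController instead, you can simply bump its fetchLimit as the next answer shows.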
To change the limit number on the fly you simply need to:
Access the fetchRequest of your NSFetchedResultsController instance, change the limit, delete the old cache if there is any and perform a new fetch.
Code:
[yourFetchedResultsController.fetchRequest setFetchLimit:50];
[NSFetchedResultsController deleteCacheWithName:@"yourCacheName"]; // the cacheName you created the controller with, if any
NSError *error = nil;
[yourFetchedResultsController performFetch:&error];
fetchBatchSize only affects how many objects are fetched at a time. It will not limit the number of objects in memory concurrently, so it is still possible to run out of memory. It is possible to limit the total number of concurrent objects with a combination of fetchBatchSize, fetchLimit, and fetchOffset, but that requires deleting the cache or storing separate caches per "page", which seems less than ideal to me.
Another, more hacky, way around it is to re-create the NSFetchedResultsController; the results from the old controller will be faulted if possible, and you can start with a clean slate. Really crude, but it avoids deleting the cache.
I believe that instead of setting -setFetchLimit: and limiting your NSFetchRequest (for new rows you would have to create a new request), you should set -fetchBatchSize: to control how many rows are loaded into memory at a time. Say, if you show 10 cells per view, set your batch size to double that or so. As you scroll, the controller will automatically load the next batch into memory.
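A brief sketch of that setup, assuming an Entry entity and a creationDate attribute to sort on (both illustrative), with context being your NSManagedObjectContext:

NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Entry"];
request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:NO]];
request.fetchBatchSize = 20; // roughly twice the number of visible cells

NSFetchedResultsController *frc =
    [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                         managedObjectContext:context
                                           sectionNameKeyPath:nil
                                                    cacheName:nil];
NSError *error = nil;
[frc performFetch:&error];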