I wonder how memcached caches a list of objects internally. Does it serialize the whole list and stick it in the cache as one value, or does it serialize each individual object and stick them in separate keys that share a common signature, so they can be retrieved together later on?
Thanks,
David
This is up to the client library, and both approaches could be implemented. The more likely approach is that the entire list is serialized as a single value, and that is how things are done in the Spymemcached client.
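For illustration, here is a minimal sketch of the single-value approach using the spymemcached client (the server address, key name, and sample data are assumptions):

import net.spy.memcached.MemcachedClient;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;

public class ListCacheExample {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
                new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // The list and its elements must be Serializable.
        List<String> stocks = new ArrayList<String>();
        stocks.add("GOOG");
        stocks.add("AAPL");

        // set() serializes the entire list into a single cache entry under one key.
        client.set("stocks", 3600, stocks);

        // get() brings the whole list back in a single round trip.
        @SuppressWarnings("unchecked")
        List<String> cached = (List<String>) client.get("stocks");
        System.out.println(cached);

        client.shutdown();
    }
}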
We use Azure Caching directly (and not through one of the available Entity Framework wrappers). Apparently, for distributed caching, we need to serialize the objects. Unfortunately, this causes issues with the lazy-loading DbContext-based proxies used for navigation properties.
I see we can use a custom serializer to map proxies to empty collections (if not loaded) or to normal objects (if loaded), but I am not sure about the implementation. One possible implementation could be based on the one used by WCF, but I am not sure Azure works the same way.
The ideal solution (and that's why I point to ProxyDataContractResolver) would be one where, when serialization happens:
if the navigation property has already been loaded, the data is serialized as if it were a normal collection,
and if it has not been loaded, it is not serialized (ideally lazy loading would work again after deserialization in that case, but it's acceptable if it doesn't).
Has anyone manually fixed that problem in an elegant way?
Thanks in advance!
I will presume that if you want to cache EF objects, you don't require lazy loading or change tracking on those entities.
I believe that both of those are enabled through object proxies that will cause serialization issues (since you don't want to serialize the proxy).
If you disable the property DbContext.Configuration.ProxyCreationEnabled, then serialization of the actual object, not the proxy, should work fine. This is typically required when returning POCO objects over WCF, but it is likely the same for other serialization scenarios such as this.
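A minimal sketch of what that looks like (MyDbContext, its Orders set, and the cache call are hypothetical placeholders):

using (var context = new MyDbContext())
{
    // Turn off dynamic proxy creation so queries materialize plain POCOs.
    context.Configuration.ProxyCreationEnabled = false;

    // 'order' is now a plain Order instance rather than a proxy subclass, so the
    // serializer used by the distributed cache won't trip over proxy types.
    var order = context.Orders.Include("Lines").First();
    cache.Put("order:" + order.Id, order); // hypothetical cache wrapper
}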
If you detach the EF entity from the DbContext before serializing it, that disables lazy loading, so your custom serializer won't try to serialize anything that isn't already part of the entity's graph.
Then, when you get it back from the cache, if you attach it to a new (identical) DbContext, that should re-enable lazy loading.
(Caveat: once you detach the entity from the context, any new queries that include that same object will create a new, attached, copy, so you will need to code with some care to avoid running into trouble with multiple potentially-different versions of the same object running around. But that said, this should let you do what you want.)
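A rough sketch of that detach/attach round trip, reusing the hypothetical MyDbContext and Order types from above, with placeholder Serialize/Deserialize helpers standing in for whatever the cache uses:

Order order;
using (var context = new MyDbContext())
{
    order = context.Orders.Find(orderId);
    // Detaching stops lazy loading; only what is already loaded gets serialized.
    context.Entry(order).State = EntityState.Detached;
}
byte[] bytes = Serialize(order);

// Later, after reading the bytes back from the cache:
Order cached = Deserialize<Order>(bytes);
using (var context = new MyDbContext())
{
    // Attaching to a new, identical context should re-enable lazy loading.
    context.Orders.Attach(cached);
    // ... work with 'cached' while this context is alive ...
}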
This is a bit of an odd question, so I'll start at the beginning...
My needs for NSFetchedResultsController (NSFRC) are the ability to perform filtering and sorting after the objects have been fetched, mostly because the filtering and sorting require querying the fetched objects themselves, which is not possible with NSFRC. So I wrote my own class, BSFetchedResultsController, which aims to replicate the functionality of NSFRC (delegate notifications, automatic sectioning and caching) but with added hooks for the user to set their own blocks for filtering and sorting. The code for this class is on GitHub here if anyone wants it: https://github.com/blindingskies/BSFetchedResultsController, although I wouldn't consider it ready yet as a drop-in replacement for NSFRC.
So, I've not yet implemented caching, mostly because I'm not really sure how Apple has implemented it. The caches are stored in binary files here:
{app dir}/Library/Caches/.CoreDataCaches/SectionInfoCaches/{cache name}/sectionInfo
So, presumably, my class would need to store its caches in a similar location? How is this structure organised, and how does it work? The cache needs to store the NSFetchRequest (or the properties required to re-generate it), and it needs to archive the fetched objects somehow. But NSManagedObject doesn't conform to NSCoding, so how does it archive the objects? And lastly, the cache needs to be updated in the NSManagedObjectContextObjectsDidChangeNotification handler.
So the real question is how to archive the fetched objects. I'm leaning towards just saving the objectIDs in an array and then getting those objects back from the context. Is that enough?
If anyone has thought about how to implement this kind of caching, I'd appreciate your thoughts.
Okay, so to answer my own question, I've implemented the cache as follows:
Created another class which retains the entity (NSEntityDescription), fetch predicate (NSPredicate) and sort descriptors (NSArray) of the NSFetchRequest, along with the sectionNameKeyPath and the additional BSFetchedResultsController objects (post-fetch predicate, filter, comparator). Made this class NSCoding compliant.
Then, at the start of the performFetch: method, if there is a cache name, unarchive the object and see if its properties match the BSFRC; if they do, use the cache's section data.
Then add another notification handler, for NSManagedObjectContextDidSaveNotification, to flush the objects to the cache.
A couple of points... I found that archiving the NSFetchRequest directly (which is NSCoding compliant) didn't work, and at the moment I am only checking the name of the NSEntityDescription.
Also, I don't cache the whole object graph, just the URIRepresentation of each NSManagedObject's NSManagedObjectID. Then I respawn the objects from these URIs, given the managed object context, after validating the cache.
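A minimal sketch of that URI round trip (the helper method names are illustrative, not the actual class API):

// Archiving: convert fetched objects into NSCoding-compliant URIs.
- (NSArray *)URIsForObjects:(NSArray *)fetchedObjects {
    NSMutableArray *uris = [NSMutableArray arrayWithCapacity:[fetchedObjects count]];
    for (NSManagedObject *object in fetchedObjects) {
        // NSURL conforms to NSCoding; NSManagedObject does not.
        [uris addObject:[[object objectID] URIRepresentation]];
    }
    return uris;
}

// Unarchiving: respawn the objects from their URIs in a given context.
- (NSArray *)objectsForURIs:(NSArray *)uris inContext:(NSManagedObjectContext *)context {
    NSMutableArray *objects = [NSMutableArray arrayWithCapacity:[uris count]];
    NSPersistentStoreCoordinator *psc = [context persistentStoreCoordinator];
    for (NSURL *uri in uris) {
        NSManagedObjectID *objectID = [psc managedObjectIDForURIRepresentation:uri];
        if (objectID != nil) {
            [objects addObject:[context objectWithID:objectID]];
        }
    }
    return objects;
}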
It seems to work, although I'm not sure how often I should flush the objects to the cache...
I have a list of objects that can sometimes change, and I want to keep a persistent cache on the device whenever the app is closed or moved to the background.
Most of the objects in the list will not change, so I was wondering what the best way to save the list is. There are two main options I'm thinking about:
Using NSKeyedArchiver / NSKeyedUnarchiver - This is the most convenient method, because the objects I'm serializing hold other custom objects, so this way I can just write a custom encode method for each of them. The major problem is that I didn't find anything on Google about how to serialize only the changed objects, and serializing the entire list every time seems very wasteful.
Using SQLite - this is what I'm currently using, and the worst problem here is that adding/changing properties of the objects is very complicated and much less elegant.
Is there any way that I can enjoy the convenience of NSKeyedArchiver but only serialize the changed objects?
Like Adam Ko I would suggest using Core Data:
This kind of problem is what it's really good at, after all!
If your cache items are independent from each other, this could be achieved by simply wrapping your cache items in a thin layer of NSManagedObject (i.e. you could benefit from Core Data with only minor changes to your app).
This wrapper entity could store an archived version of a cache item in an attribute of type NSBinaryDataAttributeType and provide access to the unarchived object through a transient property.
See Non-Standard Persistent Attributes for an example.
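A rough sketch of such a wrapper, following that pattern (the CacheItem entity and its attribute names are made up for illustration):

@interface CacheItem : NSManagedObject
@property (nonatomic, retain) NSData *payload; // persistent, NSBinaryDataAttributeType
@property (nonatomic, retain) id object;       // transient, unarchived on demand
@end

@implementation CacheItem
@dynamic payload;
@dynamic object;

- (id)object {
    [self willAccessValueForKey:@"object"];
    id value = [self primitiveValueForKey:@"object"];
    [self didAccessValueForKey:@"object"];
    if (value == nil && self.payload != nil) {
        // Lazily unarchive the cached item the first time it is requested.
        value = [NSKeyedUnarchiver unarchiveObjectWithData:self.payload];
        [self setPrimitiveValue:value forKey:@"object"];
    }
    return value;
}

- (void)setObject:(id)value {
    [self willChangeValueForKey:@"object"];
    [self setPrimitiveValue:value forKey:@"object"];
    [self didChangeValueForKey:@"object"];
    // Keep the persistent binary attribute in sync with the transient object.
    self.payload = [NSKeyedArchiver archivedDataWithRootObject:value];
}
@end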
I have a few Moose objects and some other simple data structures (hashes, arrays) I'd like to serialize.
At first, I used plain Storable:

use Storable qw(nstore retrieve);
nstore($obj, $obj_store_file);   # store in network byte order

and

my $obj = retrieve($obj_store_file);

This worked well.
Later, I found out about MooseX::Storage and KiokuDB. I tried using them to enjoy some of their benefits, but:
MooseX::Storage seemed to recreate objects that are referred to multiple times. For example, one of my serialized objects contains a few attributes, each of which refers to the same instance of another object. Before serialization, all of these references are obviously the same -- they all point to the same object. After serialization/deserialization using MooseX::Storage, this once-single object is duplicated, and each reference points to a different instance. I was told that MooseX::Storage is not appropriate for representing object graphs and that I might want to try KiokuDB.
I did, although I felt KiokuDB was overkill for my needs. I don't need all the fancy stuff a DB can offer. Unfortunately, since one of my objects is really large and chokes on memory when serialized using the defaults, it seems I would have to write a custom serializer, or store its 'data' portion separately and then write a custom KiokuX::Model... again, quite an overkill.
So, I'm back to plain ol' Storable or YAML. My question is simple: yes, there are some benefits to KiokuDB (especially the fact that it maintains an object graph), and perhaps also to MooseX::Storage (although I couldn't really find any for the latter). But given that these benefits are not really of use to me, is there any reason not to use Storable or YAML?
In other words, is there anything wrong with storing a (Moose) object this way? Is it 'illegal'?
My experience is that it depends on why you're serializing data. I like Storable for program state, including things like window size/position. I prefer YAML for configuration data or anything you might want to exchange with another copy of the application (i.e. share between users -- a Storable file might not be readable by a user with a different version of Perl or Storable). Storable supports object graphs (assuming that the freeze/thaw is done correctly). I'm not sure about YAML.
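To illustrate the object-graph point, a minimal sketch (the file path and data are made up): within a single nstore/retrieve round trip, Storable preserves shared references rather than duplicating them.

use strict;
use warnings;
use Storable qw(nstore retrieve);

my $shared = { name => 'config' };
my $graph  = { a => $shared, b => $shared };  # two slots, one object

nstore($graph, '/tmp/graph.storable');
my $copy = retrieve('/tmp/graph.storable');

# Both slots still point at the same hash after the round trip.
print $copy->{a} == $copy->{b} ? "shared\n" : "duplicated\n";  # prints "shared"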
I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app.
Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC, then one has to create two separate classes for that object: one with the JDO annotations for persisting it on the server, and another which is serialisable and used over RPC?
I notice the Stock Watcher has separate classes and I can theorise why:
Otherwise the GWT compiler would try to generate JavaScript for everything the persisted class references, like JDO and com.google.blah.users.User, etc.
Also, there may be logic in the server-side class which doesn't apply to the client, and vice versa.
I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.
The short answer is: you don't need to create duplicate classes.
I recommend that you take a look at the following Google Groups discussion on the gwt-contributors list:
http://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/3c768d8d33bfb1dc/5a38aa812c0ac52b
Here is an interesting excerpt:
If this is all you're interested in, I described a way to make GAE and GWT-RPC work together "out of the box". Just declare your entities as:

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "false")
public class MyPojo implements Serializable { }

and everything will work, but you'll have to manually deal with re-attachment when sending objects from the client back to the server.
You can use this option, and you will not need a mirror (DTO) class.
You can also try Gilead (formerly hibernate4gwt), which takes care of some of the problems involved in serializing enhanced objects.
Your assessment is correct. JDO replaces instances of Collections with its own implementations, in order to detect when the object graph changes, I suppose. These implementations are not known to the GWT compiler, so it will not be able to serialize them. This often happens to classes that are composed of otherwise GWT-compliant types but carry JDO annotations, especially if some of the properties are Collections.
For a detailed explanation and a workaround, check out this pretty influential essay on the topic: http://timepedia.blogspot.com/2009/04/google-appengine-and-gwt-now-marriage.html
I finally found a solution. Don't change your object at all; for the listing, do it this way:
// The query returns a JDO-specific List implementation that GWT cannot serialize.
List<YourCustomObject> secureList = (List<YourCustomObject>) pm.newQuery(query).execute();
// Copying into a plain ArrayList gives GWT-RPC a type it knows how to serialize.
return new ArrayList<YourCustomObject>(secureList);
The actual problem is not in serializing the object itself... the problem is serializing the List implementation returned by the query, which is an internal Google class and cannot be serialized for GWT.
You do not have to create two versions of the domain model.
Here are two tips:
Use a String-encoded key, not the App Engine Key class.
pojo = pm.detachCopy(pojo)
...will remove all the JDO enhancements.
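A hypothetical entity sketch illustrating both tips (the class and field names are made up; the @Extension line is the standard App Engine JDO way to get a String-encoded primary key):

import java.io.Serializable;
import javax.jdo.annotations.Extension;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.IdentityType;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

@PersistenceCapable(identityType = IdentityType.APPLICATION)
public class Stock implements Serializable {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    @Extension(vendorName = "datanucleus", key = "gae.encoded-pk", value = "true")
    private String key; // String-encoded key: GWT-safe, unlike the App Engine Key class

    @Persistent
    private String symbol;
}

// In the RPC service implementation:
// Stock stock = pm.getObjectById(Stock.class, someKey);
// stock = pm.detachCopy(stock); // strips the JDO enhancements before sending over RPC
// return stock;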
You don't have to create separate instances at all; in fact, you're better off not doing it. Your JDO objects should be plain POJOs anyway, and should never contain business logic. That's for your business layer, not your persistent objects themselves.
All you need to do is include the source for the annotations you are using, and GWT should compile your class just fine. Also, you want to avoid using libraries that GWT can't compile (like things that use reflection, etc.), but in all the projects I've done this has never been a problem.
I think that a better format for sending objects with GWT is JSON. In this case, the server sends a JSON string which then has to be parsed on the client. The advantage is that the final JavaScript rendered in the browser is smaller, so the page loads faster.
Secondly, to send objects over GWT-RPC, the objects must be serializable, which may not be the case for all objects.
Thirdly, GWT has built-in functions for handling JSON, so there are no issues on the client end.
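For example, a minimal client-side sketch using GWT's built-in JSON types in com.google.gwt.json.client (the payload shape and the "symbol" field are assumptions):

import com.google.gwt.json.client.JSONObject;
import com.google.gwt.json.client.JSONParser;
import com.google.gwt.json.client.JSONString;
import com.google.gwt.json.client.JSONValue;

public void handleResponse(String jsonText) {
    // parseStrict validates the payload instead of eval()-ing arbitrary script.
    JSONValue parsed = JSONParser.parseStrict(jsonText);
    JSONObject stock = parsed.isObject();
    if (stock == null) {
        return; // the payload was not a JSON object
    }
    JSONValue symbolValue = stock.get("symbol");
    JSONString symbol = symbolValue == null ? null : symbolValue.isString();
    if (symbol != null) {
        // ... update the UI with symbol.stringValue() ...
    }
}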