Data persisted on disk is not loaded by BootstrapCacheLoader (persistence)

I am using Ehcache (a fairly old version, 1.3). I have a requirement that my persisted data be loaded into memory when my application restarts, so I am using a custom BootstrapCacheLoader. In the loader's load method, when I iterate over the keys, I get nothing back: the list is empty, even though I can see data stored in the .data file.
I use List<?> keys = mycache.getKeys();
and the returned list is empty.
Am I missing anything?
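For reference, here is a minimal sketch of such a loader against the Ehcache 1.x API. WarmupLoader is an illustrative name, and the timing caveat in the comments is one possible explanation for the empty key list, not a confirmed diagnosis:

```java
import java.util.List;
import net.sf.ehcache.CacheException;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.bootstrap.BootstrapCacheLoader;

// Sketch of a custom bootstrap loader. In the 1.x line, a disk-persistent
// cache only knows its persisted keys once its disk store has been
// initialised; if load() runs before that point, getKeys() returns empty.
public class WarmupLoader implements BootstrapCacheLoader {

    public void load(Ehcache cache) throws CacheException {
        List keys = cache.getKeys(); // empty if the disk store is not initialised yet
        for (Object key : keys) {
            cache.get(key);          // touching the element pulls it into memory
        }
    }

    public boolean isAsynchronous() {
        return true;                 // run after cache initialisation completes
    }

    public Object clone() throws CloneNotSupportedException {
        return super.clone();
    }
}
```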


getAll() method of Infinispan shows wrong hit/miss count

I downloaded Infinispan 7.2.5 and set up a simple server in clustered mode. I placed it in a local JBoss folder and started the server. In Eclipse, I created a simple client that puts data into the cache and another client that gets data from the cache.
What I observed in clustered mode: when I fetch a single value that is not present in the cache using getAll(), getHits() reports a count of 1 (ideally it should be a miss of 1); and when I fetch a value that is already present in the cache, getHits() reports a count of 2.
In a standalone environment, when I fetch a single absent value using getAll(), getMisses() reports a count of 1 (which is expected), but fetching a single present value again gives a getHits() count of 2.
This is really confusing me. The other get() methods of Infinispan behave as expected; this seems to happen only with getAll().
Why is this happening?

Possible memory leak with spring-data-mongodb

I'm having what I think is a memory leak with spring-data-mongodb.
We're using MongoDB as a sort of cache for an RDBMS, so when the application starts we load a big chunk of the database.
Basically, we map/denormalise different JPA entities to Mongo documents using different "mapping" methods like this one:
@Override
public void insertFromContacts(Set<Contact> contacts, Long seed) {
    MutableLong sfId = new MutableLong(seed);
    List<SocialInfo> socialInfos = contacts.stream().map(c -> {
        SocialInfo socialInfo = new SocialInfo();
        socialInfo.setId(sfId.longValue());
        socialInfo.setSearchOnly(true);
        socialInfo.setStatus(null);
        socialInfo.setContactId(c.getId());
        sfId.increment();
        return socialInfo;
    }).collect(Collectors.toList());
    mongoTemplate.insertAll(socialInfos);
}
However, the memory does not stop growing, so I took a heap dump and realised that Spring is keeping a huge number of BasicDBObject references in memory, and I don't know why.
Checking the shortest path to the accumulation point shows that it is apparently the earlyApplicationEvents property of the AbstractApplicationContext class.
I'm using :
- Java 8
- Spring data mongodb 1.10.8.RELEASE
- Spring data commons 1.13.8.RELEASE
- Spring 4.3.6.RELEASE
Any ideas as to why?
If you track down the usage of the earlyApplicationEvents field, it basically holds onto events published during startup until the listeners can be registered, at which point it is set back to null. See here: https://github.com/spring-projects/spring-framework/blob/e7b77cb2b6c699b759a55cd81b345cca00ec5b64/spring-context/src/main/java/org/springframework/context/support/AbstractApplicationContext.java#L828
You mention that you do the processing at start-up, so I guess this prevents registration of the listeners until your process finishes.
If you move that initialization code to after the application context is fully initialized, this should fix the issue. For example, registering an event listener and reacting to the ContextRefreshedEvent should do the trick. The important part is to run after the call to registerListeners in the refresh process.
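A minimal sketch of that approach. SocialInfoMapper and loadAllFromRdbms are illustrative names for a bean wrapping the insertFromContacts-style loading, not part of the original code:

```java
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

// Runs the bulk load only after refresh() has finished, i.e. after
// registerListeners() has run and earlyApplicationEvents is back to null,
// so published events are dispatched immediately instead of being buffered.
@Component
public class MongoCacheWarmer implements ApplicationListener<ContextRefreshedEvent> {

    private final SocialInfoMapper mapper; // hypothetical bean doing the JPA -> Mongo mapping

    public MongoCacheWarmer(SocialInfoMapper mapper) {
        this.mapper = mapper;
    }

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        mapper.loadAllFromRdbms(); // the start-up load that used to run during refresh
    }
}
```

Note that ContextRefreshedEvent can fire more than once in a context hierarchy, so you may want to guard against running the load twice.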

Garbage collection when using Zend_Session_SaveHandler_DbTable

I am trying to use Zend_Session_SaveHandler_DbTable to save my session data to the DB, but as far as I can see, the expired sessions are never deleted from the database.
I can see a cron job running (Ubuntu) which deletes the file-based sessions, but I couldn't find how GC works for sessions saved in the DB.
The Zend_Session_SaveHandler_DbTable class has a garbage collection method called gc which is given to PHP via session_set_save_handler when you call Zend_Session::setSaveHandler().
The gc function should get called periodically based on the php.ini values session.gc_probability and session.gc_divisor. Make sure those values are set to something that would result in garbage collection running at some point.
Also make sure you specify the modifiedColumn and lifetimeColumn options when creating the DbTable save handler because the default gc function uses those columns to determine which rows in the session table are old and should be deleted.
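To make the trigger odds concrete: on each request, PHP runs session GC with probability session.gc_probability / session.gc_divisor. A small sketch of that arithmetic (note that Debian/Ubuntu ship php.ini with gc_probability set to 0 and rely on the cron job to clean file-based sessions, which is exactly why a DB save handler's gc may never fire there):

```java
// The chance that PHP runs session GC on a given request is
// gc_probability / gc_divisor. A quick check of two common configurations:
public class SessionGcOdds {
    static double gcChance(int gcProbability, int gcDivisor) {
        return (double) gcProbability / gcDivisor;
    }

    public static void main(String[] args) {
        // PHP defaults: 1 / 100 -> GC runs on roughly 1% of requests
        System.out.println(gcChance(1, 100));
        // Debian/Ubuntu default: 0 / 100 -> PHP-level GC never fires
        System.out.println(gcChance(0, 100));
    }
}
```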

Using Memcached to cache results from Model::find()

I'd like to store the DocumentSet returned from Model::find() in Memcached. However, I get the MongoException below when I try to work with the results after retrieving them from cache. Specifically, when using foreach, the exception is thrown by the call to if ($this->_resource->hasNext()) on line 63 of \data\source\mongo_db\Result.php
MongoException
The MongoCursor object has not been correctly initialized by its constructor
I can understand why this would be the case.
My question is: is there any way to pre-populate a Model::find() or create my own DocumentSet so that I can work with the data? Normally, I'd just convert it to an array and store that in cache. However, I need access to some of the Model methods I've written (e.g. Customer::fullName()).
Update: I've found a bit of a workaround that is OK but not great. I'm saving the Model::find() results as an array in cache with $result->to('array'). Then, upon retrieval, I loop through $results and populate a new array with Model::create($result, array("exists" => true)) for each $result.
A DocumentSet returned by Model::find contains a Mongo db cursor. It doesn't load all of the data from the database until the items are iterated. As each item is iterated, a Document is created and is cached in memory into the DocumentSet object. The built-in php function iterator_to_array() can be used to turn the DocumentSet into an array of documents which you could cache.
As you mention in your update, you can also use ->to('array') to prepare for caching and then Model::create() to build it back up. One caveat with that method: ->to('array') also casts MongoId and MongoDate objects to strings and integers respectively. If you've defined a $_schema for your model and set the 'id' and 'date' types, they will be cast back to the original MongoId and MongoDate objects in the Document returned by Model::create(). If you don't have a schema for all of your fields, it can be a problem. I have a recent pull request that tries to make it possible to do ->to('array') without casting the native Mongo objects (it also fixes a problem with the Mongo objects always being cast when inside arrays of sub-documents).
FWIW, I actually prefer saving just the data in cache because it takes less space than serializing a whole PHP object, and it avoids potential issues with classes not being defined or other items not being initialized when the item is pulled from cache.
I haven't tried this... but I would think you could make a cache strategy class that would take care of this transparently for you. Another example of the great care that went into making Lithium a very flexible and powerful framework. http://li3.me/docs/lithium/storage/cache/strategy

Core Data problem in iOS

I tried to fetch the records that are stored in Core Data, and I logged the fetched objects with NSLog; the output is below.
<NSManagedObject: 0x4e31920> (entity: MyEntity; id: 0x4e30a80 <x-coredata://01F71B1D-B468-4FCC-B083-8254F375ADE5/MyEntity/p1> ; data: <fault>)
What is the meaning of "data: <fault>" ?
Is the data corrupted ?
Thanks
No. Data is not corrupted. Take a look here for a complete description of what is happening:
Core-Data: NSLog output Does Not Show "Fields"
When fetching the data, you can ask your fetch request to return non-faulted objects by calling
[fetchRequest setReturnsObjectsAsFaults:NO];
Here's some more details about faulting:
Faulting
Managed objects typically represent data held in a persistent store. In some situations a managed object may be a “fault”—an object whose property values have not yet been loaded from the external data store—see “Faulting and Uniquing” for more details. When you access persistent property values, the fault “fires” and the data is retrieved from the store automatically. This can be a comparatively expensive process (potentially requiring a round trip to the persistent store), and you may wish to avoid unnecessarily firing a fault (see “Faulting Behavior”).
Although the description method does not cause a fault to fire, if you implement a custom description method that accesses the object’s persistent properties, this will cause a fault to fire. You are strongly discouraged from overriding description in this way.
There is no way to load individual attributes of a managed object on an as-needed basis. For patterns to deal with large attributes, see “Large Data Objects (BLOBs).”
More information can be found here: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdManagedObjects.html#//apple_ref/doc/uid/TP40003397-SW2