I have a bank/collection which caches instances of objects in memory so that each request doesn't need to go back to the datastore. I'd like Autofac to provide an instance of this bank, but then expire it after x seconds, so that a new instance is created on the next request. I'm having trouble getting my head around setting up a LifetimeScope to achieve this. I've read through this a couple of times. The Bank object is not really subject to a unit of work. It will ideally reside 'above' all units of work, caching objects within and across them.
I'm currently using the approach below; however, it isn't working as I'd hoped.
Can someone please point me in the right direction?
....
builder.Register(c =>
{
    return new ORMapBank(c.Resolve<IORMapRoot>());
}).InstancePerMatchingLifetimeScope(ExpireTimeTag.Tag());
IContainer container = builder.Build();
var TimedCache= RootScope.BeginLifetimeScope(ExpireTimeTag.Tag());
DependencyResolver.SetResolver(new AutofacDependencyResolver(TimedCache));
....
public static class ExpireTimeTag
{
    static DateTime d = DateTime.Now;
    static object tag = new object();

    public static object Tag()
    {
        if (d.AddSeconds(10) < DateTime.Now)
        {
            CreateTag();
        }
        return tag;
    }

    private static void CreateTag()
    {
        tag = new object();
    }
}
Thanks very much in advance.
It is common to use a caching decorator to achieve this kind of behaviour. Assuming your IORMapRoot is responsible for getting the data in question (it would work the same way for ORMapBank), you do the following:
Create a new type, CachingORMapRoot that implements IORMapRoot
Add a constructor that takes the expiry TimeSpan and an instance of the original IORMapRoot implementation.
Implement the members to call the underlying instance and then cache the results accordingly for subsequent calls (implementation will vary on your cache technology).
Register this type in the container as IORMapRoot
This is a very clean way to implement such caching. It also makes it easy to switch between cached and non-cached implementations.
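A minimal sketch of such a decorator with a time-based in-memory cache follows. IORMapRoot's members are not shown in the question, so the GetRoot(key) member, the SqlORMapRoot implementation, and the chosen lifetime are purely illustrative:
public class CachingORMapRoot : IORMapRoot
{
    private readonly IORMapRoot inner;
    private readonly TimeSpan expiry;
    private readonly Dictionary<string, object> cache = new Dictionary<string, object>();
    private DateTime lastRefresh = DateTime.Now;

    public CachingORMapRoot(IORMapRoot inner, TimeSpan expiry)
    {
        this.inner = inner;
        this.expiry = expiry;
    }

    // Illustrative member; substitute the real members of IORMapRoot
    public object GetRoot(string key)
    {
        // Not thread-safe as written; a real implementation would need locking
        // or a ConcurrentDictionary.
        if (DateTime.Now - lastRefresh > expiry)
        {
            cache.Clear();              // drop stale entries after the expiry window
            lastRefresh = DateTime.Now;
        }

        object value;
        if (!cache.TryGetValue(key, out value))
        {
            value = inner.GetRoot(key); // delegate to the underlying implementation
            cache[key] = value;
        }
        return value;
    }
}

// Registration (step 4), assuming SqlORMapRoot is the real implementation;
// SingleInstance is shown here only so the cache is shared across requests.
builder.Register(c => new CachingORMapRoot(new SqlORMapRoot(), TimeSpan.FromSeconds(10)))
       .As<IORMapRoot>()
       .SingleInstance();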
Instead of writing a code like
FindObjectOfType<GameManager>().gameOver()
I would like to type just
gm.gameOver()
Is there a way to do that in Unity?
Maybe using some kind of alias, a namespace, or something else. I am after making my code clean, so writing GameManager gm = FindObjectOfType<GameManager>() in every file that uses the GameManager is not what I am looking for.
In general I have to discourage this. Such shortened aliases are very questionable for types, and I would certainly not recommend them for a complete method call; it is bad enough when a lot of people do it with variables and fields.
Always use proper variable and field names so that by reading the code you already know what you are dealing with!
How about storing it in a variable (or class field) at the beginning, or whenever needed (but as early as possible)?
// You could also reference it already in the Inspector
// and skip the FindObjectOfType call entirely
[SerializeField] private _GameManager gm;
private void Awake()
{
    if (!gm) gm = FindObjectOfType<_GameManager>();
}
and then later use
gm.gameOver();
where needed.
In general you should do this only once, because FindObjectOfType is a very performance-intensive call.
This has to be done of course for each class wanting to use the _GameManager instance ...
However this would mostly be the preferred way to go.
Alternatively you could also (ab)use a singleton pattern. It is controversial and a lot of people dislike it, but in the end FindObjectOfType does much the same thing on the design side and is even worse in performance.
public class _GameManager : MonoBehaviour
{
    // Backing field where the instance reference will actually be stored
    private static _GameManager instance;

    // A public read-only property for either returning the existing reference,
    // finding it in the scene,
    // or creating one if not existing at all
    public static _GameManager Instance
    {
        get
        {
            // if the reference exists already simply return it
            if (instance) return instance;

            // otherwise find it in the scene
            instance = FindObjectOfType<_GameManager>();

            // if found return it now
            if (instance) return instance;

            // otherwise a lazy creation of the object if not existing in scene
            instance = new GameObject("_GameManager").AddComponent<_GameManager>();
            return instance;
        }
    }

    private void Awake()
    {
        instance = this;
    }
}
so you can at least reduce it to
_GameManager.Instance.gameOver();
the only alias you can create now would be using a using statement at the top of the file like e.g.
using gm = _GameManager;
then you can use
gm.Instance.gameOver();
It probably won't get much shorter than this.
But as said, this is very questionable and doesn't bring any real benefit; it only makes your code harder to read and maintain! What if later on you also have a GridManager and a GroupMaster? Then calling something gm is only confusing ;)
By the way, you shouldn't start type names with an underscore; rather call it e.g. MyGameManager, or use a different namespace if you want to avoid name conflicts with an existing type.
ConfigProperty.idPropertyMap is filled on the server side. (verified via log output)
Accessing it on the client side shows it's empty. :-( (verified via log output)
Is this some default behaviour? (I don't think so)
Is the problem maybe related to the inner class ConfigProperty.IdPropertyMap, java.util.HashMap usage, serialization or some field access modifier issue?
Thanks for your help
// the transfer object
public class ConfigProperty implements IsSerializable, Comparable {
...
static public class IdPropertyMap extends HashMap
implements IsSerializable
{
...
}
protected static IdPropertyMap idPropertyMap = new IdPropertyMap();
...
}
// the server service
public class ManagerServiceImpl extends RemoteServiceServlet implements
ManagerService
{
...
public IdPropertyMap getConfigProps(String timeToken)
throws ConfiguratorException
{
...
}
}
added from below after some good answers (thanks!):
Answer bottom line: static field sync is not implemented/supported currently; someone (maybe me) would have to file a feature request.
Just my perspective (a fallen-in-love newbie to GWT :-)):
I understand pretty well (not perfectly! ;-)) the possible implications of "global" variable syncing (a dependency graph or usage of annotations could be useful).
But to a new (otherwise experienced Java EE/web) user it looks like this:
you create some myapp.shared.dto.MyClass class (dto = data transfer objects)
you add some static fields in it that just represent collections of those objects (and maybe some other DTOs)
you can also do this on the client side and all the other static methods work as well
the only thing not working is synchronization (which is not sooo bad in the first place)
BUT: some provided annotation, let's say @Transfer static Collection<MyClass> myObjList;, would be handy, since I seem to know the impact and benefits that this would bring.
In my case it's rather simple since the client is more static, but I would like to have this data without explicitly implementing it, if the GWT framework could do it.
Static variables are purely class-level variables; they have nothing to do with individual instances, and serialization applies only to objects.
So you are always getting an empty ConfigProperty.idPropertyMap on the client.
The idea of RPC is not that you can act as though the client and the server are exactly the same JVM, but that they can share the objects that you pass over the wire. To send a static field over the wire, from the server to the client, the object stored in that field must be returned from the RPC method.
Static properties are not serialized and sent over the wire, because they do not belong to a single object, but to the class itself.
public class MyData implements Serializable {
protected String name;//sent over the wire, each MyData has its own name
protected String key;
protected static String masterKey;//All objects on the server or client
// share this, it cannot be sent over RPC. Instead, another RPC method
// could access it
}
Note, however, that it will only be that one instance which will be shared - if something else on the server changes that field, all clients which have asked for a copy will need to be updated
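As a hedged sketch of that approach using the types from the question (the getIdPropertyMap/setIdPropertyMap accessors and the managerService async instance are assumptions for illustration):
// Server side: return the object held in the static field from the RPC method,
// so its contents travel over the wire as an ordinary return value.
public class ManagerServiceImpl extends RemoteServiceServlet implements ManagerService
{
    public ConfigProperty.IdPropertyMap getConfigProps(String timeToken)
        throws ConfiguratorException
    {
        return ConfigProperty.getIdPropertyMap(); // assumed accessor for the static field
    }
}

// Client side: store the returned copy in the client's own static field.
// The two sides still hold separate copies; they are only as fresh as the last call.
managerService.getConfigProps(timeToken, new AsyncCallback<ConfigProperty.IdPropertyMap>() {
    public void onSuccess(ConfigProperty.IdPropertyMap result) {
        ConfigProperty.setIdPropertyMap(result); // assumed setter for the static field
    }

    public void onFailure(Throwable caught) {
        GWT.log("getConfigProps failed", caught);
    }
});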
I am trying to create a dynamic navigation class.
class myApp_Helper_Breadcrum
{
    protected $navigationArray = array();
    private static $_instance = null;

    public static function getInstance()
    {
        if (!isset(self::$_instance)) {
            self::$_instance = new self();
        }
        return self::$_instance;
    }

    private function __construct()
    {
        $this->navigationArray = array();
    }

    public function popin($popInElement)
    {
        array_push($this->navigationArray, $popInElement);
    }

    public function displayLinks()
    {
        // print array
    }
}
In the bootstrap I did the following:
$nlinks=myApp_Helper_Breadcrum::getInstance();
Zend_Registry::set('nlinks',$nlinks);
Now in my controller I am calling it as follows:
$nlinks= Zend_Registry::get('nlinks');
$nlinks->popin('Home');
$nlinks->displayLinks();
The problem is that even though this class is a singleton, the constructor is called again and again, which re-initializes my array. What I am trying to achieve is to keep pushing items into the navigation array as I navigate the site.
Any idea why it is like this in ZF?
PHP doesn't run the way Java does, where you have a JVM to maintain the state of your classes. In Java you can have a singleton behave exactly as you describe, but in PHP all the classes are refreshed with each subsequent call to the web server. So your singleton will stay in place for the duration of that call to the server, but once the response is sent you start over again on the next call.
If you want to maintain state through successive calls you need to use the $_SESSION to keep track of your state.
EDIT:
My answer above deals with PHP in general and not the Zend Framework specifically. See my comment below.
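Building on the $_SESSION suggestion above, here is a minimal sketch, assuming session_start() (or Zend_Session::start()) has already run; the 'breadcrum' session key and the persist() helper are only illustrative:
class myApp_Helper_Breadcrum
{
    private static $_instance = null;
    protected $navigationArray = array();

    public static function getInstance()
    {
        if (self::$_instance === null) {
            if (isset($_SESSION['breadcrum'])) {
                // restore the helper saved at the end of the previous request
                self::$_instance = unserialize($_SESSION['breadcrum']);
            } else {
                self::$_instance = new self();
            }
        }
        return self::$_instance;
    }

    public function popin($popInElement)
    {
        array_push($this->navigationArray, $popInElement);
        $this->persist();
    }

    public function persist()
    {
        // write the current state back so the next request can pick it up
        $_SESSION['breadcrum'] = serialize($this);
    }

    private function __construct()
    {
        $this->navigationArray = array();
    }
}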
Try to define your component as below:
class MyApp_Helper_Breadcrum
{
private static $_instance = null; // use private here
public static function getInstance()
{
if (self::$_instance === null) { // use strictly equal to null
self::$_instance = new self();
}
return self::$_instance;
}
private function __construct() // use private here
{
// ...
}
// ...
}
I ran into the exact same problem.
The problem is that the persistence of your classes is at the request scope.
And with Zend, you can even have multiple requests for a page load.
PHP is a shared nothing architecture; each
request starts in a new process, and at the end of the request, it's all
thrown away. Persisting across requests simply cannot happen -- unless
you do your own caching. You can serialize objects and restore them --
but pragmatically, in most cases you'll get very little benefit from
this (and often run into all sorts of issues, particularly when it comes
to resource handles).
You may want to use Zend_Cache for persistence.
Even though this is old, I would like to add my 2 cents.
Zend DOES NOT create a singleton, that persists across multiple requests. Regardless of the interpretation of the ZF documentation, on each request, the whole stack is re-initialized.
This is where your problem comes from. Since bootstrapping is done on each request, each request also re-initializes your helper method. As far as I know, helpers in ZF 1.x CAN'T be singletons.
The only way I see this being implemented as you want it to be is by using sessions.
I'm thinking of introducing some kind of caching mechanism (like HTML5 local storage) to avoid frequent RPC calls whenever possible. I would like to get feedback on how caching can be introduced in the below piece of code without changing much of the architecture (like using gwt-dispatch).
void getData() {
    /* Loading indicator code skipped */

    /* Below is a gwt-maven plugin generated singleton for SomeServiceAsync */
    SomeServiceAsync.Util.getInstance().getDataBySearchCriteria(searchCriteria, new AsyncCallback<List<MyData>>() {
        public void onFailure(Throwable caught) {
            /* Loading indicator code skipped */
            Window.alert("Problem : " + caught.getMessage());
        }

        public void onSuccess(List<MyData> dataList) {
            /* Loading indicator code skipped */
        }
    });
}
One way I can think of to deal with this is to have a custom MyAsyncCallback class defining onSuccess/onFailure methods and then do something like this -
void getData() {
    AsyncCallback<List<MyData>> callback = new MyAsyncCallback<List<MyData>>();

    // Check if data is present in cache
    if (cacheIsPresent) {
        callback.onSuccess(dataRetrievedFromCache);
    } else {
        // Call RPC same as above and, of course, update the cache wherever appropriate
    }
}
Apart from this, I had one more question. What is the maximum size of storage available for LocalStorage for popular browsers and how do the browsers manage the LocalStorage for different applications / URLs? Any pointers will be appreciated.
I suggest adding a delegate class which handles the caching. The delegate class could look like this:
public class Delegate {
    private static SomeServiceAsync service = SomeServiceAsync.Util.getInstance();
    private static List<MyData> data;

    public static void getData(final AsyncCallback<List<MyData>> callback) {
        if (data != null) {
            // cache hit: answer from memory without a server round trip
            callback.onSuccess(data);
        } else {
            service.getData(new AsyncCallback<List<MyData>>() {
                public void onSuccess(List<MyData> result) {
                    data = result; // remember the result for subsequent calls
                    callback.onSuccess(result);
                }

                public void onFailure(Throwable caught) {
                    callback.onFailure(caught);
                }
            });
        }
    }
}
Of course this is a crude sample, you have to refine the code to make it reliable.
I did take too long to decide on using hash maps to cache results.
My strategy was not to use a singleton hashmap, but a singleton common objects class storing static instances of cache. I did not see the reason to load a single hashmap with excessive levels of hashtree branching.
Reduce the amount of hash resolution
If I know that the objects I am dealing with are Employee, Address, and Project, I would create three static hashes:
final static private Map<Long, Employee> employeeCache =
    new HashMap<Long, Employee>();
final static private Map<Long, Address> addressCache =
    new HashMap<Long, Address>();
final static private Map<String, Project> projectCache =
    new HashMap<String, Project>();

public static void putEmployee(Long id, Employee emp){
    employeeCache.put(id, emp);
}
public static Employee getEmployee(Long id){
    return employeeCache.get(id);
}

public static void putAddress(Long id, Address addr){
    addressCache.put(id, addr);
}
public static Address getAddress(Long id){
    return addressCache.get(id);
}

public static void putProject(String name, Project project){
    projectCache.put(name, project);
}
public static Project getProject(String name){
    return projectCache.get(name);
}
Putting it all in a single map would be hairy. The principle of efficient access and storage of data is - the more information you have determined about the data, the more you should exploit segregating that data using that information you have. It would reduce the levels of hash resolution required to access the data. Not to mention all the risky and indefinite type casting that would need to be done.
Avoid hashing if you can
If you know that you always have a single value of CurrentEmployee and NextEmployee,
avoid storing them in the hash of Employee. Just create static instances
Employee CurrentEmployee, NextEmployee;
That would avoid needing any hash resolution at all.
Avoid contaminating the global namespace
And if possible, keep them as class instances rather than static instances, to avoid contaminating the global namespace.
Why avoid contaminating the global namespace? Because, more than one class would inadvertently use the same name causing untold number of bugs due to global namespace confusion.
Keep the cache nearest to where it is expected or used
If possible, if the cache is mainly for a certain class, keep the cache as a class instance within that class. And provide an eventbus event for any rare instance that another class would need to get data from that cache.
So that you would have an expectable pattern
ZZZManager.getZZZ(id);
Finalise the cache if possible
Otherwise (or additionally), privatise it by providing putters and getters. Do not allow another class to inadvertently re-instantiate the cache, especially if one day your class becomes a general utility library. Putters and getters also get a chance to validate each request, so that a caller cannot wipe out the cache or push the app into an exception by presenting keys or values the cache is unable to handle.
Translating these principles into JavaScript local storage
The GWT page says
Judicious use of naming conventions can help with processing storage data. For example, in a web app named MyWebApp, key-value data associated with rows in a UI table named Stock could have key names prefixed with MyWebApp.Stock.
Therefore, supplementing the HashMap in your class with rather crude code:
public class EmployeePresenter {
    // Use HTML5 local storage when the browser supports it,
    // otherwise fall back to an in-memory map.
    Storage empStore = Storage.getLocalStorageIfSupported();
    HashMap<Long, Employee> employeeCache;

    public EmployeePresenter(){
        if (empStore == null) {
            employeeCache = new HashMap<Long, Employee>();
        }
    }

    private String getPrefix(){
        return this.getClass() + ".Employee";
        //return this.getClass().getCanonicalName() + ".Employee";
    }

    public void putEmployee(Long id, Employee employee){
        if (empStore != null) {
            // jsonEncode is a user-supplied encoder; local storage holds strings only
            empStore.setItem(getPrefix() + id, jsonEncode(employee));
            return;
        }
        employeeCache.put(id, employee);
    }

    public Employee getEmployee(Long id){
        if (empStore != null) {
            // jsonDecode is the matching user-supplied decoder
            return (Employee) jsonDecode(Employee.class, empStore.getItem(getPrefix() + id));
        }
        return employeeCache.get(id);
    }
}
Since the local store is string-based only, I am presuming that you will be writing your own JSON encoder/decoder. On the other hand, why not write the JSON directly into the store the moment you receive it from the callback?
Memory constraints?
I cannot profess expertise in this question but I predict the answer for hashmaps to be the maximum memory constrained by the OS on the browser. Minus all the memory that is already consumed by the browser, plugins and javascript, etc, etc overhead.
For HTML5 local storage the GWT page says
"LocalStorage: 5MB per app per browser. According to the HTML5 spec, this limit can be increased by the user when needed; however, only a few browsers support this."
"SessionStorage: Limited only by system memory"
Since you are using gwt-dispatch, an easy solution here is to cache the gwt-dispatch Response objects against the Request objects as keys in a Map. It's easy to implement and type-agnostic. You will need to override the Request's equals() method to see if the Request is already in the cache. If it is, return the Response from the cache; otherwise hit the server with a call.
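As a hedged illustration of the idea, written against the plain RPC service from the question rather than gwt-dispatch's own types (SearchCriteria and CachingSearchClient are assumed names):
public class CachingSearchClient {
    private final SomeServiceAsync service = SomeServiceAsync.Util.getInstance();

    // SearchCriteria must override equals() and hashCode() so that logically
    // identical requests map onto the same cache entry.
    private final Map<SearchCriteria, List<MyData>> cache =
            new HashMap<SearchCriteria, List<MyData>>();

    public void getData(final SearchCriteria criteria,
                        final AsyncCallback<List<MyData>> callback) {
        List<MyData> cached = cache.get(criteria);
        if (cached != null) {
            callback.onSuccess(cached);      // cache hit: no server round trip
            return;
        }
        service.getDataBySearchCriteria(criteria, new AsyncCallback<List<MyData>>() {
            public void onSuccess(List<MyData> result) {
                cache.put(criteria, result); // remember the response for identical requests
                callback.onSuccess(result);
            }

            public void onFailure(Throwable caught) {
                callback.onFailure(caught);
            }
        });
    }
}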
IMO, LocalStorage is not a necessity here if all you need is an in-session cache for performance. Local Storage is only a must for offline apps.
You may look into this - http://turbomanage.wordpress.com/2010/07/12/caching-batching-dispatcher-for-gwt-dispatch/
This is pretty basic, but sort of a generic issue so I want to hear what people's thoughts are. I have a situation where I need to take an existing MSI file and update it with a few standard modifications and spit out a new MSI file (duplication of old file with changes).
I started writing this with a few public methods and a basic input path for the original MSI. The thing is, for this to work properly, a strict path of calls has to be followed from the caller:
var custom = new CustomPackage(sourcemsipath);
custom.Duplicate(targetmsipath);
custom.Upgrade();
custom.Save();
custom.WriteSmsXmlFile(targetxmlpath);
Would it be better to put all the conversion logic as part of the constructor instead of making them available as public methods? (in order to avoid having the caller have to know what the "proper order" is):
var custom = new CustomPackage(sourcemsipath, targetmsipath); // saves converted msi
custom.WriteSmsXmlFile(targetxmlpath); // saves optional xml for sms
The constructor would then directly duplicate the MSI file, upgrade it, and save it to the target location. WriteSmsXmlFile would still be a public method since it is not always required.
Personally I don't like to have the constructor actually "do stuff" - I prefer to be able to call public methods, but it seems wrong to assume that the caller should know the proper order of calls?
An alternative would be to duplicate the file first, and then pass the duplicated file to the constructor - but it seems better to have the class do this on its own.
Maybe I got it all backwards and need two classes instead: SourcePackage, TargetPackage and pass the SourcePackage into the constructor of the TargetPackage?
I'd go with your first thought: put all of the conversion logic into one place. No reason to expose that sequence to users.
Incidentally, I agree with you about not putting actions into a constructor. I'd probably not do this in the constructor, and instead do it in a separate converter method, but that's personal taste.
It may be just me, but the thought of a constructor doing all these things makes me shiver. But why not provide a static method, which does all this:
public class CustomPackage
{
    private CustomPackage(String sourcePath)
    {
        ...
    }

    public static CustomPackage Create(String sourcePath, String targetPath)
    {
        var custom = new CustomPackage(sourcePath);
        custom.Duplicate(targetPath);
        custom.Upgrade();
        custom.Save();
        return custom;
    }
}
The actual advantage of this method is that you won't have to give out an instance of CustomPackage unless the conversion process actually succeeded (save for the optional parts).
Edit In C#, this factory method can even be used (by using delegates) as a "true" factory according to the Factory Pattern:
public interface ICustomizedPackage
{
    ...
}

public class CustomPackage : ICustomizedPackage
{
    ...
}

public class Consumer
{
    public delegate ICustomizedPackage Factory(String sourcePath, String targetPath);

    private Factory factory;

    public Consumer(Factory factory)
    {
        this.factory = factory;
    }

    private ICustomizedPackage CreatePackage()
    {
        return factory.Invoke(..., ...);
    }

    ...
}
and later:
new Consumer(CustomPackage.Create);
You're right to think that the constructor shouldn't do any more work than to simply initialize the object.
Sounds to me like what you need is a Convert(targetmsipath) function that wraps the calls to Duplicate, Upgrade and Save, thereby removing the need for the caller to know the correct order of operations, while at the same time keeping the logic out of the constructor.
You can also overload it to include a targetxmlpath parameter that, when supplied, also calls the WriteSmsXmlFile function. That way all the related operations are called from the same function on the caller's side and the order of operations is always correct.
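For illustration, a rough sketch of that suggestion (the method bodies are assumed from the question's existing Duplicate/Upgrade/Save/WriteSmsXmlFile members):
public class CustomPackage
{
    public CustomPackage(string sourceMsiPath)
    {
        // only capture/validate the source path here; no conversion work in the constructor
    }

    // Wraps the required steps so callers cannot get the order wrong
    public void Convert(string targetMsiPath)
    {
        Duplicate(targetMsiPath);
        Upgrade();
        Save();
    }

    // Overload for callers that also want the optional SMS XML file
    public void Convert(string targetMsiPath, string targetXmlPath)
    {
        Convert(targetMsiPath);
        WriteSmsXmlFile(targetXmlPath);
    }

    // Existing members from the question, shown here only as stubs
    public void Duplicate(string targetMsiPath) { /* ... */ }
    public void Upgrade() { /* ... */ }
    public void Save() { /* ... */ }
    public void WriteSmsXmlFile(string targetXmlPath) { /* ... */ }
}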
In such situations I typically use the following design:
var task = new Task(src, dst); // required params goes to constructor
task.Progress = ProgressHandler; // optional params setup
task.Run();
I think there are service-oriented ways and object-oriented ways.
The service-oriented way would be to create a series of filters that pass along an immutable data transfer object (entity).
var service1 = new Msi1Service();
var msi1 = service1.ReadFromFile(sourceMsiPath);
var service2 = new MsiCustomService();
var msi2 = service2.Convert(msi1);
service2.WriteToFile(msi2, targetMsiPath);
service2.WriteSmsXmlFile(msi2, targetXmlPath);
The object-oriented way can use the decorator pattern.
var decoratedMsi = new CustomMsiDecorator(new MsiFile(sourceMsiPath));
decoratedMsi.WriteToFile(targetMsiPath);
decoratedMsi.WriteSmsXmlFile(targetXmlPath);
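A rough sketch of how that decorator could be shaped (the IMsiPackage interface and the member names are assumptions; the question does not show MsiFile's actual members):
public interface IMsiPackage
{
    void WriteToFile(string targetMsiPath);
}

public class MsiFile : IMsiPackage
{
    public MsiFile(string sourceMsiPath) { /* load the original MSI */ }

    public void WriteToFile(string targetMsiPath) { /* save the package as-is */ }
}

public class CustomMsiDecorator : IMsiPackage
{
    private readonly IMsiPackage inner;

    public CustomMsiDecorator(IMsiPackage inner)
    {
        this.inner = inner;
    }

    public void WriteToFile(string targetMsiPath)
    {
        // apply the standard modifications, then delegate to the wrapped package
        inner.WriteToFile(targetMsiPath);
    }

    public void WriteSmsXmlFile(string targetXmlPath)
    {
        // optional extra output that only the decorated package provides
    }
}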