Singleton pattern using PHP - zend-framework

I am trying to create a dynamic navigation class.
class myApp_Helper_Breadcrum
{
    protected $navigationArray = array();
    private static $_instance = null;

    public static function getInstance()
    {
        if (!isset(self::$_instance)) {
            self::$_instance = new self();
        }
        return self::$_instance;
    }

    private function __construct()
    {
        $this->navigationArray = array();
    }

    public function popin($popInElement)
    {
        array_push($this->navigationArray, $popInElement);
    }

    public function displayLinks()
    {
        // print array
    }
}
In my bootstrap I did the following:
$nlinks=myApp_Helper_Breadcrum::getInstance();
Zend_Registry::set('nlinks',$nlinks);
Now in my controller I call it as follows:
$nlinks= Zend_Registry::get('nlinks');
$nlinks->popin('Home');
$nlinks->displayLinks();
The problem is that even though this class is a singleton, the constructor is called again and again, which re-initializes my array. What I am trying to achieve is to keep pushing items onto the navigation array as I navigate the site.
Any idea why it behaves like this in ZF?

PHP doesn't run the way Java does, where a JVM maintains the state of your classes between requests. In Java you can have a singleton behave exactly as you describe, but in PHP all classes are created fresh with each call to the web server. Your singleton will stay in place for the duration of that call to the server, but once the response is sent you start over again on the next call.
If you want to maintain state across successive calls, you need to use $_SESSION to keep track of it.
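For example, a minimal sketch of persisting the breadcrumb trail in $_SESSION (the session key name is an assumption):
// Minimal sketch: keep the navigation array in the session so it survives
// across requests. The 'navigationArray' key is illustrative.
session_start();
if (!isset($_SESSION['navigationArray'])) {
    $_SESSION['navigationArray'] = array();
}
$_SESSION['navigationArray'][] = 'Home'; // push the current page

// Later, render the links:
foreach ($_SESSION['navigationArray'] as $link) {
    echo $link, PHP_EOL;
}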
EDIT:
My answer above deals with PHP in general and not the Zend Framework specifically. See my comment below.

Try to define your component as below:
class MyApp_Helper_Breadcrum
{
    private static $_instance = null; // use private here

    public static function getInstance()
    {
        if (self::$_instance === null) { // use strictly equal to null
            self::$_instance = new self();
        }
        return self::$_instance;
    }

    private function __construct() // use private here
    {
        // ...
    }

    // ...
}

I ran into the exact same problem.
The problem is that the persistence of your classes is limited to the request scope.
And with Zend, you can even have multiple requests per page load.
PHP is a shared-nothing architecture; each request starts in a new process, and at the end of the request, it's all thrown away. Persisting across requests simply cannot happen -- unless you do your own caching. You can serialize objects and restore them -- but pragmatically, in most cases you'll get very little benefit from this (and often run into all sorts of issues, particularly when it comes to resource handles).
You may want to use Zend_Cache for persistence.

Even though this is old, I would like to add my 2 cents.
Zend DOES NOT create a singleton that persists across multiple requests. Regardless of the interpretation of the ZF documentation, on each request the whole stack is re-initialized.
This is where your problem comes from. Since bootstrapping is done on each request, each request also re-initializes your helper. As far as I know, helpers in ZF 1.x CAN'T be singletons.
The only way I see this being implemented as you want it to be is by using sessions.
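To make the sessions approach concrete in ZF 1.x terms, a hedged sketch using Zend_Session_Namespace (the 'breadcrumbs' namespace name is made up):
// Hedged sketch: back the breadcrumb store with the session so it
// persists between requests.
$ns = new Zend_Session_Namespace('breadcrumbs');
if (!isset($ns->links)) {
    $ns->links = array();
}
$links = $ns->links;
$links[] = 'Home';   // push the current page
$ns->links = $links; // write back to the session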

Related

GWT / JSNI - "DataCloneError - An object could not be cloned" - how do I debug?

I am attempting to call out to parallels.js via JSNI. Parallels provides a nice API around web workers, and I wrote some lightweight wrapper code which provides a more convenient interface to workers from GWT than Elemental. However I'm getting an error which has me stumped:
com.google.gwt.core.client.JavaScriptException: (DataCloneError) @io.mywrapper.workers.Parallel::runParallel([Ljava/lang/String;Lcom/google/gwt/core/client/JavaScriptObject;Lcom/google/gwt/core/client/JavaScriptObject;)([Java object: [Ljava.lang.String;@1922352522, JavaScript object(3006), JavaScript object(3008)]): An object could not be cloned.
This comes from, in hosted mode:
at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:249)
at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:136)
at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:571)
at com.google.gwt.dev.shell.ModuleSpace.invokeNativeVoid(ModuleSpace.java:299)
at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeVoid(JavaScriptHost.java:107)
at io.mywrapper.workers.Parallel.runParallel(Parallel.java)
Here's my code:
Example client call to create a worker:
Workers.spawnWorker(new String[]{"hello"}, new Worker() {
    @Override
    public String[] work(String[] data) {
        return data;
    }

    @Override
    public void done(String[] data) {
        int i = data.length;
    }
});
The API that provides a general interface:
public class Workers {
    public static void spawnWorker(String[] data, Worker worker) {
        Parallel.runParallel(data, workFunction(worker), callbackFunction(worker));
    }

    /**
     * Create a reference to the work function.
     */
    public static native JavaScriptObject workFunction(Worker worker) /*-{
        return worker == null ? null : $entry(function(x) {
            worker.@io.mywrapper.workers.Worker::work([Ljava/lang/String;)(x);
        });
    }-*/;

    /**
     * Create a reference to the done function.
     */
    public static native JavaScriptObject callbackFunction(Worker worker) /*-{
        return worker == null ? null : $entry(function(x) {
            worker.@io.mywrapper.workers.Worker::done([Ljava/lang/String;)(x);
        });
    }-*/;
}
Worker:
public interface Worker extends Serializable {
    /**
     * Called to perform the work.
     * @param data
     * @return
     */
    public String[] work(String[] data);

    /**
     * Called with the result of the work.
     * @param data
     */
    public void done(String[] data);
}
And finally the Parallels wrapper:
public class Parallel {
    /**
     * @param data Data to be passed to the function
     * @param work Function to perform the work, given the data
     * @param callback Function to be called with result
     */
    public static native void runParallel(String[] data, JavaScriptObject work, JavaScriptObject callback) /*-{
        var p = new $wnd.Parallel(data);
        p.spawn(work).then(callback);
    }-*/;
}
What's causing this?
The JSNI docs say, regarding arrays:
opaque value that can only be passed back into Java code
This is quite terse, but ultimately my arrays are passed back into Java code, so I assume these are OK.
EDIT - ok, bad assumption. The arrays, despite only ostensibly being passed back to Java code, are causing the error (which is strange, because there's very little googleability on DataCloneError.) Changing them to String works; however, String isn't sufficient for my needs here. Looks like objects face the same kinds of issues as arrays do; I saw Thomas' reference to JSArrayUtils in another StackOverflow thread, but I can't figure out how to call it with an array of strings (it wants an array of JavaScriptObjects as input for non-primitive types, which does me no good.) Is there a neat way out of this?
EDIT 2 - Changed to use JSArrayString wherever I was using String[]. New issue; no stacktrace this time, but in the console I get the error: Uncaught ReferenceError: __gwt_makeJavaInvoke is not defined. When I click on the url to the generated script in developer tools, I get this snippet:
self.onmessage = function(e) {self.postMessage((function (){
    try {
        return __gwt_makeJavaInvoke(3)(null, 65626, jsFunction, this, arguments);
    }
    catch (e) {
        throw e;
    }
})(e.data))}
I see that __gwt_makeJavaInvoke is part of the JSNI class; so why would it not be found?
You can find a working example of GWT and WebWorkers here: https://github.com/tomekziel/gwtwwlinker/
This is preliminary work, but using this pattern I was able to pass GWT objects to and from a webworker using serialization provided by AutoBeanFactory.
If you never use dev mode it is currently safe to pretend that a Java String[] is a JS array with strings in it. This will break in dev mode since arrays have to be usable in Java and Strings are treated specially, and may break in the future if the compiler optimizes arrays differently.
Cases where this could go wrong in the future:
The semantics of Java arrays and JavaScript arrays are different - Java arrays cannot be resized, and are initialized with specific values based on the component type (the data in the array). Since you are writing Java code, the compiler could conceivably make assumptions based on details about how you create and use that array that could be broken by JS code that doesn't know to never modify the array.
Some arrays of primitive types could be optimized into TypedArrays in JavaScript, more closely following Java semantics in terms of resizing and Java behavior in terms of allocation. This would be a performance boost as well, but could break any use of int[], double[], etc.
Instead, you should copy your data into a JsArrayString, or just use the JS array to hold the data rather than going back and forth, depending on your use case. The various JsArray types can be resized and already exist as JavaScript objects that outside JS can understand and work with.
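For reference, a minimal sketch of that copy (the helper name toJsArray is made up):
// Minimal sketch: copy a Java String[] into a JsArrayString that plain JS
// (and parallel.js) can work with directly.
public static JsArrayString toJsArray(String[] data) {
    JsArrayString out = JavaScriptObject.createArray().cast();
    for (String s : data) {
        out.push(s);
    }
    return out;
}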
Reply to EDIT 2:
At a guess, the parallel.js script is trying to run your code from another scope, such as in the webworker (that's the point of the code, right?), where your GWT code isn't present. As such, it can't call __gwt_makeJavaInvoke, which is the bridge back into dev mode (it would be a different failure with compiled JS). According to http://adambom.github.io/parallel.js/ there are specific requirements that a passed callback must meet to be passed in to spawn and perhaps then - your anonymous functions definitely do not meet them, and it may not be possible to maintain Java semantics.
Before I get much deeper, check out this answer I did a while ago addressing the basic issues with webworkers and gwt/java: https://stackoverflow.com/a/11376059/860630
As noted there, WebWorkers are effectively new processes, with no shared code or shared state with the original process. The Parallel.js code attempts to paper over this with a little bit of trickery - shared state is only available in the form of the contents passed in to the original Parallel constructor, but you are attempting to pass in instances of 'java' objects and calling methods on them. Those Java instances come with their own state, and potentially can link back to the rest of the Java app by fields in the Worker instance. If I were implementing Worker and doing something that referenced other data than what was passed in, then I would be seeing further bizarre failures.
So the functions you pass in must be completely standalone - they must not refer to external code in any way, since then the function can't be passed off to the webworker, or to several webworkers, each unaware of each other's existence. See https://github.com/adambom/parallel.js/issues/32 for example:
That's not possible since it would:
- require a shared state across workers
- require us to transmit all scope variables (I don't think there's even a possibility to read the available scopes)
The only thing which might be possible would be cache variables, but these can already be defined in the function itself with spawn() and don't make any sense in map (because there's no shared state).
Without being actually familiar with how parallel.js is implemented (all of this answer so far comes from reading the docs and a quick google search for "parallel.js shared state", plus having experimented with WebWorkers for a day or so and deciding that my present problem wasn't yet worth the bother), I would guess that then is unrestricted, and you can pass it whatever you like, but spawn, map, and reduce must be written in such a way that their JS can be passed off to the new JS process and completely stand alone there.
This may be possible from your normal Java code when compiled, provided you have just one implementation of Worker and that impl never uses state other than what is directly passed in. In that case the compiler should rewrite your methods to be static so that they are safe to use in this context. However, that doesn't make for a very useful library, as it seems you are trying to achieve. With that in mind, you could keep your worker code in JSNI to ensure that you follow the parallel.js rules.
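For instance, a hedged sketch that keeps the whole work function in JSNI so nothing in it refers back to GWT code (the pass-through function body is purely illustrative):
// Hedged sketch: the function handed to spawn() lives entirely in JSNI and
// references no enclosing scope, so parallel.js can ship it to the worker.
public static native void runStandalone(JsArrayString data, JavaScriptObject callback) /*-{
    var p = new $wnd.Parallel(data);
    // Must stay self-contained - no references to GWT or outer variables:
    p.spawn(function (d) { return d; }).then(callback);
}-*/;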
Finally, and against the normal GWT rules, avoid $entry for calls you expect to happen in other contexts, since those workers have no access to the normal exception handling and scheduling that $entry enables.
(and finally finally, this is probably still possible if you are very careful at writing Worker implementations and write a Generator that invokes each worker implementation in very specific ways to make sure that com.google.gwt.dev.jjs.impl.MakeCallsStatic and com.google.gwt.dev.jjs.impl.Pruner can correctly act to knock out the this in those instance methods once they've been rewritten as JS functions. I think the cleanest way to do this is to emit the JSNI in the generator itself, call a static method written in real Java, and from that static method call the specific instance method that does the heavy lifting for spawn, etc.)

Enabling caching for GWT RPC asynchronous calls

I'm thinking of introducing some kind of caching mechanism (like HTML5 local storage) to avoid frequent RPC calls whenever possible. I would like to get feedback on how caching can be introduced in the below piece of code without changing much of the architecture (like using gwt-dispatch).
void getData() {
    /* Loading indicator code skipped */
    /* Below is a gwt-maven plugin generated singleton for SomeServiceAsync */
    SomeServiceAsync.Util.getInstance().getDataBySearchCriteria(searchCriteria, new AsyncCallback<List<MyData>>() {
        public void onFailure(Throwable caught) {
            /* Loading indicator code skipped */
            Window.alert("Problem : " + caught.getMessage());
        }

        public void onSuccess(List<MyData> dataList) {
            /* Loading indicator code skipped */
        }
    });
}
One way I can think of to deal with this is to have a custom MyAsyncCallback class defining onSuccess/onFailure methods and then do something like this -
void getData() {
    AsyncCallback<List<MyData>> callback = new MyAsyncCallback<List<MyData>>();
    // Check if data is present in cache
    if (cacheIsPresent) {
        callback.onSuccess(dataRetrievedFromCache);
    } else {
        // Call RPC as above and, of course, update the cache where appropriate
    }
}
Apart from this, I had one more question. What is the maximum size of storage available for LocalStorage for popular browsers and how do the browsers manage the LocalStorage for different applications / URLs? Any pointers will be appreciated.
I suggest adding a delegate class which handles the caching. The delegate class could look like this:
public class Delegate {
    private static SomeServiceAsync service = SomeServiceAsync.Util.getInstance();
    private static List<MyData> data;

    public static void getData(final Callback callback) {
        if (data != null) { // serve the cached copy if we have one
            callback.onSuccess(data);
        } else {
            service.getData(new Callback() {
                public void onSuccess(List<MyData> result) {
                    data = result; // remember for subsequent calls
                    callback.onSuccess(result);
                }
            });
        }
    }
}
Of course this is a crude sample, you have to refine the code to make it reliable.
It took me quite a while to decide on using hash maps to cache results.
My strategy was not to use a singleton hashmap, but a singleton common-objects class storing static instances of the caches. I did not see the reason to load a single hashmap with excessive levels of hash-tree branching.
Reduce the amount of hash resolution
If I know that the objects I am dealing with are Employee, Address, and Project, I would create three static hashes:
final static private Map<Long, Employee> employeeCache =
    new HashMap<Long, Employee>();
final static private Map<Long, Address> addressCache =
    new HashMap<Long, Address>();
final static private Map<String, Project> projectCache =
    new HashMap<String, Project>();
public static void putEmployee(Long id, Employee emp) {
    employeeCache.put(id, emp);
}

public static Employee getEmployee(Long id) {
    return employeeCache.get(id);
}

public static void putAddress(Long id, Address addr) {
    addressCache.put(id, addr);
}

public static Address getAddress(Long id) {
    return addressCache.get(id);
}

public static void putProject(String name, Project proj) {
    projectCache.put(name, proj);
}

public static Project getProject(String name) {
    return projectCache.get(name);
}
Putting it all in a single map would be hairy. The principle of efficient access and storage of data is: the more you know about the data, the more you should exploit that knowledge by segregating the data accordingly. It reduces the levels of hash resolution required to access the data, not to mention all the risky and indefinite type casting that would otherwise need to be done.
Avoid hashing if you can
If you know that you always have a single value of CurrentEmployee and NextEmployee,
avoid storing them in the hash of Employee. Just create static instances
static Employee CurrentEmployee, NextEmployee;
That would avoid needing any hash resolution at all.
Avoid contaminating the global namespace
And if possible, keep them as class instances rather than static instances, to avoid contaminating the global namespace.
Why avoid contaminating the global namespace? Because more than one class could inadvertently use the same name, causing an untold number of bugs due to global-namespace confusion.
Keep the cache nearest to where it is expected or used
If possible, if the cache is mainly for a certain class, keep the cache as a class instance within that class. And provide an EventBus event for the rare case that another class needs to get data from that cache.
So that you would have a predictable pattern:
ZZZManager.getZZZ(id);
Finalise the cache if possible; otherwise (or additionally) privatise it by providing putters and getters. Do not allow another class to inadvertently re-instantiate the cache, especially if one day your class becomes a general utility library. Also, putters and getters have the opportunity to validate requests, to prevent a caller from wiping out the cache or pushing the app into an Exception by presenting keys or values the cache is unable to handle.
Translating these principles into Javascript local storage
The GWT page says
Judicious use of naming conventions can help with processing storage data. For example, in a web app named MyWebApp, key-value data associated with rows in a UI table named Stock could have key names prefixed with MyWebApp.Stock.
Therefore, supplementing the HashMap in your class, with rather crude code:
public class EmployeePresenter {
    Storage empStore = Storage.getLocalStorageIfSupported();
    HashMap<Long, Employee> employeeCache;

    public EmployeePresenter() {
        if (empStore == null) {
            // no local storage available - fall back to an in-memory map
            employeeCache = new HashMap<Long, Employee>();
        }
    }

    private String getPrefix() {
        return this.getClass() + ".Employee";
        //return this.getClass().getCanonicalName()+".Employee";
    }

    public void putEmployee(Long id, Employee employee) {
        if (empStore != null) {
            empStore.setItem(getPrefix() + id, jsonEncode(employee));
            return;
        }
        employeeCache.put(id, employee);
    }

    public Employee getEmployee(Long id) {
        if (empStore != null) {
            return (Employee) jsonDecode(Employee.class, empStore.getItem(getPrefix() + id));
        }
        return employeeCache.get(id);
    }
}
Since the localstore is string-based only, I am presuming that you will be writing your own JSON encoder/decoder. On the other hand, why not write the JSON directly into the store the moment you receive it from the callback?
Memory constraints?
I cannot profess expertise on this question, but I predict that for hashmaps the answer is the maximum memory the OS allows the browser, minus all the memory already consumed by the browser, plugins, JavaScript, and other overhead.
For HTML5 local storage the GWT page says
"LocalStorage: 5MB per app per browser. According to the HTML5 spec, this limit can be increased by the user when needed; however, only a few browsers support this."
"SessionStorage: Limited only by system memory"
Since you are using gwt-dispatch, an easy solution here is to cache the gwt-dispatch Response objects against the Request objects as keys in a Map. It's easy to implement and type-agnostic. You will need to override the Request's equals() method to check whether the Request is already in the cache. If yes, return the Response from the cache; otherwise hit the server with a call.
IMO, LocalStorage is not a necessity here if all you need is an in-session cache for performance. Local Storage is only a must for offline apps.
You may look into this - http://turbomanage.wordpress.com/2010/07/12/caching-batching-dispatcher-for-gwt-dispatch/
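A minimal sketch of that idea ("dispatch" stands in for your gwt-dispatch service; Request is assumed to implement equals()/hashCode() over its fields):
// Hedged sketch: cache Response objects keyed by their Request.
private final Map<Request, Response> cache = new HashMap<Request, Response>();

void execute(final Request request, final AsyncCallback<Response> callback) {
    Response cached = cache.get(request);
    if (cached != null) {
        callback.onSuccess(cached); // served from cache, no server round trip
        return;
    }
    dispatch.execute(request, new AsyncCallback<Response>() {
        public void onFailure(Throwable caught) {
            callback.onFailure(caught);
        }
        public void onSuccess(Response result) {
            cache.put(request, result); // remember for next time
            callback.onSuccess(result);
        }
    });
}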

Zend - Design Pattern DataMapper & Table Gateway

This is directly out of the Zend Quick Start guide. My question is: why would you need the setDbTable() method when the getDbTable() method assigns a default Zend_Db_Table object? If you know this mapper uses a particular table, why even offer the possibility of potentially using the "wrong" table via setDbTable()? What flexibility do you gain by being able to set the table if the rest of the code (find(), fetchAll() etc.) is specific to Guestbook?
class Application_Model_GuestbookMapper
{
    protected $_dbTable;

    public function setDbTable($dbTable)
    {
        if (is_string($dbTable)) {
            $dbTable = new $dbTable();
        }
        if (!$dbTable instanceof Zend_Db_Table_Abstract) {
            throw new Exception('Invalid table data gateway provided');
        }
        $this->_dbTable = $dbTable;
        return $this;
    }

    public function getDbTable()
    {
        if (null === $this->_dbTable) {
            $this->setDbTable('Application_Model_DbTable_Guestbook');
        }
        return $this->_dbTable;
    }

    ... GUESTBOOK SPECIFIC CODE ...
}

class Application_Model_DbTable_Guestbook extends Zend_Db_Table_Abstract
{
    protected $_name = 'guestbook_table';
}
Phil is correct; this is known as the lazy-loading design pattern. I just implemented this pattern in a recent project because of these benefits:
When I call the getMember() method, I will get a return value regardless of whether it has been set before. This is great for method chaining: $this->getCar()->getTires()->getSize();
This pattern offers flexibility in that outside calling code is still able to set member values: $myClass->setCar(new Car());
-- EDIT --
Use caution when implementing the lazy-loading design pattern. If your objects are not properly hydrated, a query will be issued for every piece of data which is NOT available. The best thing to do is to tail your DB query log during the dev phase to ensure the number and type of queries are what you expect. A project I was working on was issuing over 27 queries for a "detail" page, and I had no idea until I saw the queries.
This method is called lazy-loading. It allows a property to remain null until requested unless it is set earlier.
One use for setDbTable() would be testing. This way you could set a mock DB table or something like that.
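For example, a hedged sketch of such a test (PHPUnit's ZF1-era getMock() is assumed; newer PHPUnit uses createMock()):
// Hedged sketch: inject a test double through setDbTable() so the mapper
// never touches the real database.
$mockTable = $this->getMock('Application_Model_DbTable_Guestbook');
$mapper = new Application_Model_GuestbookMapper();
$mapper->setDbTable($mockTable);
// find()/fetchAll() now run against the mock instead of guestbook_table.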
One addition: if setDbTable() is solely for lazy-loading, wouldn't it make more sense to make it private? That way it would avoid accidental assignment to the wrong table, as originally mentioned by Sam.
Should we be compromising the design for the sake of testability?

Creating an Autofac Lifetimescope that will expire with time

I have a bank/collection which caches instances of objects in memory so that each request doesn't need to go back to the datastore. I'd like Autofac to provide an instance of this bank, but then expire it after x seconds, so that a new instance is created on the next request. I'm having trouble getting my head around setting up a LifetimeScope to achieve this. I've read through this a couple of times. The Bank object is not really subject to a unit of work. It will ideally reside 'above' all units of work, caching objects within and across them.
I'm currently using the approach below, however it isn't working as I'd hoped.
Can someone please point me in the right direction?
....
builder.Register(c =>
{
    return new ORMapBank(c.Resolve<IORMapRoot>());
}).InstancePerMatchingLifetimeScope(ExpireTimeTag.Tag());

IContainer container = builder.Build();
var TimedCache = RootScope.BeginLifetimeScope(ExpireTimeTag.Tag());
DependencyResolver.SetResolver(new AutofacDependencyResolver(TimedCache));
....
public static class ExpireTimeTag
{
    static DateTime d = DateTime.Now;
    static Object tag = new Object();

    public static object Tag()
    {
        if (d.AddSeconds(10) < DateTime.Now)
        {
            CreateTag(); // note: d is never reset, so every call after 10 seconds creates a new tag
        }
        return tag;
    }

    private static void CreateTag()
    {
        tag = new Object();
    }
}
Thanks very much in advance.
It is common to use a caching decorator to achieve this kind of behaviour. Assuming your IORMapRoot is responsible for getting the data in question (it would work the same for ORMapBank), you do the following:
Create a new type, CachingORMapRoot that implements IORMapRoot
Add a constructor that takes the expiry TimeSpan and an instance of the original IORMapRoot implementation.
Implement the members to call the underlying instance and then cache the results accordingly for subsequent calls (implementation will vary on your cache technology).
Register this type in the container as IORMapRoot
This is a very clean way to implement such caching. It also makes it easy to switch between cached and non-cached implementations.
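A hedged sketch of that decorator (the question doesn't show IORMapRoot's members, so GetData() here is purely illustrative):
// Hedged sketch of the caching decorator described above.
public class CachingORMapRoot : IORMapRoot
{
    private readonly IORMapRoot inner;
    private readonly TimeSpan expiry;
    private object cached;
    private DateTime fetchedAt;

    public CachingORMapRoot(IORMapRoot inner, TimeSpan expiry)
    {
        this.inner = inner;
        this.expiry = expiry;
    }

    // Illustrative member - substitute IORMapRoot's real methods.
    public object GetData()
    {
        if (cached == null || DateTime.Now - fetchedAt > expiry)
        {
            cached = inner.GetData(); // refresh from the underlying instance
            fetchedAt = DateTime.Now;
        }
        return cached;
    }
}
Registering this type As<IORMapRoot>() with SingleInstance() then lets ORMapBank keep resolving IORMapRoot as before, while the decorator refreshes its data on the configured interval.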

Class design: file conversion logic and class design

This is pretty basic, but sort of a generic issue so I want to hear what people's thoughts are. I have a situation where I need to take an existing MSI file and update it with a few standard modifications and spit out a new MSI file (duplication of old file with changes).
I started writing this with a few public methods and a basic input path for the original MSI. The thing is, for this to work properly, a strict path of calls has to be followed from the caller:
var custom = new CustomPackage(sourcemsipath);
custom.Duplicate(targetmsipath);
custom.Upgrade();
custom.Save();
custom.WriteSmsXmlFile(targetxmlpath);
Would it be better to put all the conversion logic as part of the constructor instead of making them available as public methods? (in order to avoid having the caller have to know what the "proper order" is):
var custom = new CustomPackage(sourcemsipath, targetmsipath); // saves converted msi
custom.WriteSmsXmlFile(targetxmlpath); // saves optional xml for sms
The constructor would then directly duplicate the MSI file, upgrade it, and save it to the target location. WriteSmsXmlFile() is still a public method since it is not always required.
Personally I don't like to have the constructor actually "do stuff" - I prefer to be able to call public methods, but it seems wrong to assume that the caller should know the proper order of calls?
An alternative would be to duplicate the file first, and then pass the duplicated file to the constructor - but it seems better to have the class do this on its own.
Maybe I got it all backwards and need two classes instead: SourcePackage and TargetPackage, passing the SourcePackage into the constructor of the TargetPackage?
I'd go with your first thought: put all of the conversion logic into one place. No reason to expose that sequence to users.
Incidentally, I agree with you about not putting actions into a constructor. I'd probably not do this in the constructor, and instead do it in a separate converter method, but that's personal taste.
It may be just me, but the thought of a constructor doing all these things makes me shiver. But why not provide a static method which does all this:
public class CustomPackage
{
    private CustomPackage(String sourcePath)
    {
        ...
    }

    public static CustomPackage Create(String sourcePath, String targetPath)
    {
        var custom = new CustomPackage(sourcePath);
        custom.Duplicate(targetPath);
        custom.Upgrade();
        custom.Save();
        return custom;
    }
}
The actual advantage of this method is that you won't have to give out an instance of CustomPackage unless the conversion process actually succeeded (save for the optional parts).
Edit: In C#, this factory method can even be used (via delegates) as a "true" factory according to the Factory Pattern:
public interface ICustomizedPackage
{
    ...
}

public class CustomPackage : ICustomizedPackage
{
    ...
}

public class Consumer
{
    public delegate ICustomizedPackage Factory(String sourcePath, String targetPath);

    private Factory factory;

    public Consumer(Factory factory)
    {
        this.factory = factory;
    }

    private ICustomizedPackage CreatePackage()
    {
        return factory.Invoke(..., ...);
    }

    ...
}
and later:
new Consumer(CustomPackage.Create);
You're right to think that the constructor shouldn't do any more work than to simply initialize the object.
Sounds to me like what you need is a Convert(targetmsipath) function that wraps the calls to Duplicate, Upgrade and Save, thereby removing the need for the caller to know the correct order of operations, while at the same time keeping the logic out of the constructor.
You can also overload it to include a targetxmlpath parameter that, when supplied, also calls the WriteSmsXmlFile function. That way all the related operations are called from the same function on the caller's side and the order of operations is always correct.
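A minimal sketch of that wrapper, reusing the method names from the question:
// Minimal sketch: one entry point enforces the correct order of operations.
public void Convert(string targetMsiPath)
{
    Duplicate(targetMsiPath);
    Upgrade();
    Save();
}

// Overload that also writes the optional SMS XML file:
public void Convert(string targetMsiPath, string targetXmlPath)
{
    Convert(targetMsiPath);
    WriteSmsXmlFile(targetXmlPath);
}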
In such situations I typically use the following design:
var task = new Task(src, dst); // required params go to the constructor
task.Progress = ProgressHandler; // optional params setup
task.Run();
I think there are service-oriented ways and object-oriented ways.
The service-oriented way would be to create a series of filters that pass along an immutable data transfer object (entity).
var service1 = new Msi1Service();
var msi1 = service1.ReadFromFile(sourceMsiPath);
var service2 = new MsiCustomService();
var msi2 = service2.Convert(msi1);
service2.WriteToFile(msi2, targetMsiPath);
service2.WriteSmsXmlFile(msi2, targetXmlPath);
The object-oriented way can use the decorator pattern.
var decoratedMsi = new CustomMsiDecorator(new MsiFile(sourceMsiPath));
decoratedMsi.WriteToFile(targetMsiPath);
decoratedMsi.WriteSmsXmlFile(targetXmlPath);