VBA Excel Best Practices for using Forms and Classes together?

I am new to using classes and would like to learn smart ways of integrating them with forms. Specifically, when you enter data into a form, should that form be tightly linked with a class, and if so, how?
Let's use a simple example. John is a collector of all kinds of cards: baseball cards, Pokemon cards, etc. He keeps his card list database stored in Excel worksheets and manages them using forms. There are forms for entering new cards and modifying their status in his collection, and there are also forms and functions for analyzing his data.
So he might have a clsBaseball, a clsPokemon, a clsAlbum and a clsSalesRecord. He also has forms for entering and modifying data and other types of analysis forms that compare cards, calculate stats for various teams and time periods, etc.
When John clicks the "New Baseball Card" button, his frmBaseball pops up.
1. He enters baseball card data.
2. He clicks Update.
3. The form is validated.
4. The data is saved onto the worksheet.
At any point in the above process is clsBaseball used? I can see how the class would be used to load up all data in service of fancy sorting or statistical routines, but is it actually used during the entry phase such that the fields on the form have a direct effect on an instance of clsBaseball?

It depends on how robust your requirements are. If you're just prototyping something, then tightly binding your code to your form is fine. If, however, you want to build a more flexible, robust and extensible application, then modelling your data structure in classes and encapsulating your functionality in objects is the way to go.
Classes enable you to model real-world things, and they facilitate implementing them in a logically separated, tiered architecture. In your case, your form provides the presentation layer, your classes provide the application tier, and your Excel sheets provide the data tier (normally a database): a classic three-tier design.
So, in your example I would model the data in classes, as you have done, but if I needed functionality that operates on all my clsBaseball objects, I would add a managing class for each object type. These managing classes contain a collection of your clsBaseball objects and allow you to implement behaviour that requires the whole collection. For example, you would implement your Update method in clsBaseball, but you would implement CalculateStats in the clsBaseballs managing class; you could also call the Update method of every clsBaseball object by iterating over the collection held by the managing class clsBaseballs.
In answer to your questions:
At any point in the above process is clsBaseball used?
1) He enters baseball card data:
You might instantiate an instance of clsBaseball at this point, but perhaps not yet.
2) He clicks Update:
You instantiate a new instance of clsBaseball (or use the existing one), pass in the user-entered values and call clsBaseball's Update method. In this way you encapsulate the clsBaseball behaviour.
3) The form is validated:
This would occur before clsBaseball updates the datastore, so you would probably have a validation method in clsBaseball that is called before any data is updated. Any errors can be passed back up to the presentation layer (your form). Alternatively you could just validate in the form, though having business logic in your presentation layer is frowned upon, because you might want to swap your presentation layer for something else; see the MVC/MVP patterns.
4) The data is saved onto the worksheet:
This is done by clsBaseball's Update method.
As for your fancy sorting and statistical routines, you could get this data from your managing classes, which, because they contain collections of your objects, are perfectly set up to analyse all your clsBaseball instances. All of this functionality is nicely encapsulated, there is no repetition (DRY), and anyone coming to the code for the first time will be able to figure out what's going on.
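To make this concrete, here is a minimal sketch of that shape. It is written in Python for brevity (in VBA these would be the class modules clsBaseball and clsBaseballs plus a form event handler), and all names are illustrative rather than code from John's workbook:

```python
# Entity class: owns its own validation and persistence.
class BaseballCard:
    def __init__(self, player, team, year):
        self.player = player
        self.team = team
        self.year = year

    def validate(self):
        """Return a list of error messages; an empty list means valid."""
        errors = []
        if not self.player:
            errors.append("Player name is required.")
        if not 1900 <= self.year <= 2100:
            errors.append("Year looks wrong.")
        return errors

    def update(self, worksheet):
        """Persist this card to the data tier (a list stands in for a worksheet)."""
        worksheet.append([self.player, self.team, self.year])


# Managing class: holds the collection and behaviour that needs all cards.
class BaseballCards:
    def __init__(self):
        self.cards = []

    def add(self, card):
        self.cards.append(card)

    def update_all(self, worksheet):
        for card in self.cards:
            card.update(worksheet)

    def calculate_stats(self):
        """Collection-level behaviour: number of cards per team."""
        by_team = {}
        for card in self.cards:
            by_team[card.team] = by_team.get(card.team, 0) + 1
        return by_team


# What the form's Update button handler would do (the presentation layer).
def on_update_clicked(form_values, worksheet, collection):
    card = BaseballCard(form_values["player"], form_values["team"],
                        int(form_values["year"]))
    errors = card.validate()    # step 3: validation lives in the class
    if errors:
        return errors           # hand errors back to the form to display
    card.update(worksheet)      # step 4: the class saves itself
    collection.add(card)
    return []
```

The point of the shape: the form never touches the worksheet directly; it builds a card, asks it to validate itself, and asks it to save itself.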
I answered a question a little while ago about how to structure data in MS Access; the answer includes examples of a class and a managing class: MS ACCESS 2003 triggers (Query Event), and Excel import

Related

Recursive model design

The software I am working on uses an API which roughly has this organization (you might have to read it twice in order to resolve the symbols):
A scenario is a process which contains a set of intervals (duration) and events (time point).
An interval is defined by its beginning and end events, which specify the times at which it starts and ends (hence its duration). An interval can hold an arbitrary number of processes (like a scenario).
An event is just a point in time.
Events can be placed on a graphical view in order to create a scenario.
As you can see, this model is recursive, since you can put a scenario in an interval, and another interval inside this scenario ad infinitum.
My question is: in a "view model" / "presenter" / "view" pattern, what should the ownership relations between the API objects and the view-model objects be? Should I let the API manage the ownership of its own model objects, like Event and Interval, or should I instantiate them when I instantiate the corresponding view-model objects? Is there a best practice?
You should probably let the API manage its own domain objects, and in your project map those objects to custom Model or ViewModel objects as necessary.
Whenever working with ViewModels, try to remember that MVVM or MVP is a pattern for your UI, not a pattern for business logic. Presenters should call other classes (which should be regarded as outside the MVVM/MVP/MVPVM pattern) to do their business logic. It sounds like the API you mention provides a lot of your business functionality; ideally, your models would be specific to your app and then you would map the API's objects to your model objects appropriately.
It is common, and sometimes a mistake, to use the domain objects (e.g. those provided by the API) as your Model, so be careful: the instant you need a property or attribute on your Model that isn't provided by the API object, you are stuck. Be very willing to map the API's objects to custom Model objects that exist only for your app or site.
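As a minimal sketch of that mapping step (Python, with hypothetical names; the real domain types would come from the API):

```python
# Hypothetical API object, standing in for whatever the library returns.
class ApiEvent:
    def __init__(self, time):
        self.time = time


# Your app's own view model: free to grow properties the API lacks.
class EventViewModel:
    def __init__(self, time, is_selected=False):
        self.time = time
        self.is_selected = is_selected  # UI-only state the API knows nothing about


def to_view_model(api_event):
    """Map the API's domain object to the app's own view model."""
    return EventViewModel(time=api_event.time)


vm = to_view_model(ApiEvent(time=3.5))
vm.is_selected = True  # no need to touch the API object for UI concerns
```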
When in doubt, get back to SOLID, especially the single-responsibility principle.
Hope I understood you correctly.

On observing an execution tree of interdependent models in MVC

I've developed on the Yii Framework for a while now (4 months), and so far I have encountered some issues with MVC that I want to share with experienced developers out there. I'll present these issues by listing their levels of complexity.
[Level 1] The CR (create/update) form. First off, we have a lot of forms. Each form is itself a model, so each has some validation rules, some attributes, and some operations to perform on the attributes. In a lot of cases, each of these forms does both updating and creating of records in the db using a single active record object.
-> So at this level of complexity, a form has to:
when opened,
be able to display the db-friendly data from the db in a human-friendly way
be able to display all the form fields based on the attributes of the active record object; adding, removing, or altering columns in the db table has to affect the display of the form
when saving, be able to convert the human-friendly data back into db-friendly data before the data is persisted
when validating, be able to perform the basic validations enforced by the active record object; it also has to perform further validations to fulfil some business rules
when validation fails, be able to roll back changes made to the attributes as well as changes made to the db, and present the user with their originally entered data
[Level 2] The extended CR form. A form that can create/update records in different tables at once. Not just that: whether a form creates/updates one of its records can sometimes depend on other conditions (more business rules), so a form can sometimes update records in tables A and B but not D, and sometimes update records in A and D but not B.
-> So at this level of complexity, we see that a form has to:
be able to satisfy [Level 1]
be able to conditionally create/update certain records, and conditionally create/update certain columns of certain records.
[Level 3] The Tree of Models. The role of a form in an application is, in many ways, a port that lets users interact with your application. To satisfy requests, this port will interact with many other objects which, in turn, interact with many more objects. Some of these objects can be seen as models. An Active Record is a model, but a Mailer can also be a model, and so can a RobotArm. These models use one another to satisfy a user's request. Each model can perform its own operations, and the whole tree has to be able to roll back any changes made in the case of error/failure.
Has anyone out there come across or been able to solve these problems?
I've come up with ideas like encapsulating model attributes in ModelAttribute objects to track their existence throughout the client, server, and db tiers.
I've also thought we should give the tree of models an Observer to watch the models and tell them to roll back changes when errors occur. But what if multiple observers can exist? What if a node uses its parent's observer but gives its children different observers?
Engineers, developers, Rails, Yii, Zend, ASP, JavaEE, any MVC guys, please join this discussion for the sake of science.
--- Update to teresko's response ---
#teresko I actually intended to incorporate the services into the execution inside a unit of work, and to have the unit of work not worry about new/updated/deleted. Each object inside the unit of work will be responsible for its state and be required to implement its own commit() and rollback(). Once an error occurs, the unit of work will roll back all changes, from the newest registered object to the oldest registered object, since we're not only dealing with a database; we can have mailers, publishers, etc. If, otherwise, the tree executes successfully, we call commit() from the oldest registered object to the newest registered object. This way the mailer can save the mail and send it on commit.
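A small sketch of the scheme described above, assuming (as stated) that every registered participant implements commit() and rollback(); the names are illustrative:

```python
class UnitOfWork:
    """Participants manage their own state; this class only orders the calls."""

    def __init__(self):
        self.registered = []  # in registration order, oldest first

    def register(self, participant):
        self.registered.append(participant)

    def commit_all(self):
        for participant in self.registered:            # oldest to newest
            participant.commit()

    def rollback_all(self):
        for participant in reversed(self.registered):  # newest to oldest
            participant.rollback()


def run(unit_of_work, steps):
    """Execute the tree of operations, then decide: commit or rollback."""
    try:
        for step in steps:
            step(unit_of_work)  # each step registers participants as it works
    except Exception:
        unit_of_work.rollback_all()
        raise
    unit_of_work.commit_all()
```

Because commit runs oldest-first and only after every step has succeeded, a participant such as a mailer can safely send its mail inside commit().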
Using a data mapper is a great idea, but we still have to make sure the columns in the db match the data mapper and the domain object. Moreover, an extended CR form, or a model whose attributes depend on other models, has to match their attributes in terms of validation and datatype. So maybe an attribute can be an object and be shipped from model to model? An attribute could also tell whether it has been modified, what validation should be performed on it, and how it can be made human-friendly, application-friendly, and db-friendly. Any update to the db schema will affect this attribute, thereby throwing exceptions that require developers to change the system to satisfy the change.
The cause
The root of your problem is misuse of the active record pattern. AR is meant for simple domain entities with only basic CRUD operations. When you start adding large amounts of validation logic and relations between multiple tables, the pattern starts to break apart.
Active record, at its best, is a minor SRP violation, for the sake of simplicity. When you start piling on responsibilities, you start to incur severe penalties.
Solution(s)
Level 1:
The best option is to separate the business and storage logic. Most often this is done by using domain objects and data mappers:
Domain objects (in other materials also known as business objects or domain model objects) deal with validation and specific business rules, and are completely unaware of how (or even whether) the data in them is stored and retrieved. They also let you have objects that are not directly bound to storage structures (like DB tables).
For example: you might have a LiveReport domain object, which represents current sales data. But it might have no specific table in the DB. Instead it can be serviced by several mappers that pool data from Memcache, an SQL database and some external SOAP service. And the LiveReport instance's logic is completely unrelated to storage.
Data mappers know where to put the information from domain objects, but they do not do any validation or data-integrity checks. Though they can handle exceptions that come from low-level storage abstractions, like violation of a UNIQUE constraint.
Data mappers can also perform transactions, but if a single transaction needs to be performed for multiple domain objects, you should be looking to add a Unit of Work (more about it below).
In more advanced/complicated cases data mappers can interact with and utilize DAOs and query builders. But that is more for situations where you aim to create ORM-like functionality.
Each domain object can have multiple mappers, but each mapper should work only with a specific class of domain objects (or a subclass of one, if your code adheres to the LSP). You should also recognize that a domain object and a collection of domain objects are two separate things and should have separate mappers.
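To ground the two roles, here is a bare-bones sketch (Python with sqlite3; the User class, table layout and method names are illustrative). The domain object knows the rules; the mapper knows the table:

```python
import sqlite3


class User:
    """Domain object: validation and business rules, unaware of storage."""

    def __init__(self, user_id=None, email=""):
        self.user_id = user_id
        self.email = email

    def validate(self):
        errors = []
        if "@" not in self.email:
            errors.append("email looks invalid")
        return errors


class UserMapper:
    """Data mapper: knows the table layout, performs no validation."""

    def __init__(self, connection):
        self.connection = connection

    def fetch(self, user_id):
        row = self.connection.execute(
            "SELECT id, email FROM users WHERE id = ?", (user_id,)).fetchone()
        return User(user_id=row[0], email=row[1]) if row else None

    def store(self, user):
        if user.user_id is None:
            cursor = self.connection.execute(
                "INSERT INTO users (email) VALUES (?)", (user.email,))
            user.user_id = cursor.lastrowid
        else:
            self.connection.execute(
                "UPDATE users SET email = ? WHERE id = ?",
                (user.email, user.user_id))


connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
mapper = UserMapper(connection)

user = User(email="john@example.com")
if not user.validate():   # domain rules first ...
    mapper.store(user)    # ... then the mapper persists; it never validates
```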
Also, each domain object can contain other domain objects, just like each data mapper can contain other mappers. But in the case of mappers it is much more a matter of preference (I dislike it vehemently).
Another improvement that could alleviate your current mess would be to prevent application logic from leaking into the presentation layer (most often, the controller). Instead, you would largely benefit from using services that contain the interaction between mappers and domain objects, thus creating a public-ish API for your model layer.
Basically, with services you encapsulate complete segments of your model that can (in the real world, with minor effort and adjustments) be reused in different applications. For example: Recognition, Mailer or DocumentLibrary would all be services.
Also, I should note that not all services have to contain domain objects and mappers. A quite good example would be the previously mentioned Mailer, which could be used either directly by a controller, or (more likely) by another service.
Level 2:
If you stop using the active record pattern, this becomes quite a simple problem: you need to make sure that you save data only from those domain objects which have actually changed since the last save.
As I see it, there are two ways to approach this:
Quick'n'Dirty
If something changed, just update it all ...
The way that I prefer is to introduce a checksum variable in the domain object, which holds a hash of all the domain object's variables (with the exception, of course, of the checksum itself).
Each time the mapper is asked to save a domain object, it calls a method isDirty() on that domain object, which checks whether the data has changed. The mapper can then act accordingly. With some adjustments, this can also be used for object graphs (if they are not too extensive, in which case you might need to refactor anyway).
Also, if your domain object actually gets mapped to several tables (or even different forms of storage), it might be reasonable to have several checksums, one for each set of variables. Since mappers are already written for specific classes of domain objects, this would not strengthen the existing coupling.
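A minimal sketch of the checksum idea (Python; illustrative):

```python
import hashlib


class DomainObject:
    """Keeps a checksum of its own state so mappers can skip unchanged objects."""

    def __init__(self, **state):
        self.__dict__.update(state)
        self._checksum = self._compute_checksum()

    def _compute_checksum(self):
        # Hash every attribute except the checksum itself.
        state = {k: v for k, v in self.__dict__.items() if k != "_checksum"}
        return hashlib.sha1(repr(sorted(state.items())).encode()).hexdigest()

    def is_dirty(self):
        return self._compute_checksum() != self._checksum

    def mark_clean(self):
        """Called by the mapper after a successful save."""
        self._checksum = self._compute_checksum()


obj = DomainObject(name="foo")
assert not obj.is_dirty()
obj.name = "bar"
assert obj.is_dirty()  # the mapper would now save it, then call mark_clean()
```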
For PHP you will find some code examples in this answer.
Note: if your implementation is using DAOs to isolate the domain objects from the data mappers, then the logic of checksum-based verification would move to the DAO.
Unit of Work
This is the "industry standard" for your problem and there is a whole chapter (11th) dealing with it in PoEAA book.
The basic idea is this, you create an instance, that acts like controller (in classical, not in MVC sense of the word) between you domain objects and data mappers.
Each time you alter or remove a domain object, you inform the Unit of Work about it. Each time you load data in a domain object, you ask Unit of Work to perform that task.
There are two ways to tell the Unit of Work about the changes:
caller registration: object that performs the change also informs the Unit of Work
object registration: the changed object (usually from setter) informs the Unit of Work, that it was altered
When all the interaction with the domain objects has been completed, you call the commit() method on the Unit of Work. It then finds the necessary mappers and stores all the altered domain objects.
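A sketch of object registration and commit, assuming mappers that expose insert/update/delete methods (Python; illustrative names):

```python
class UnitOfWork:
    """Tracks new/dirty/removed domain objects; flushes them via their mappers."""

    def __init__(self, mappers):
        self.mappers = mappers  # maps a domain class to its data mapper
        self.new, self.dirty, self.removed = [], [], []

    def register_new(self, obj):
        self.new.append(obj)

    def register_dirty(self, obj):
        if obj not in self.new and obj not in self.dirty:
            self.dirty.append(obj)

    def register_removed(self, obj):
        self.removed.append(obj)

    def commit(self):
        for obj in self.new:
            self.mappers[type(obj)].insert(obj)
        for obj in self.dirty:
            self.mappers[type(obj)].update(obj)
        for obj in self.removed:
            self.mappers[type(obj)].delete(obj)
        self.new, self.dirty, self.removed = [], [], []
```

With object registration, a domain object's setters would call register_dirty(self) on the Unit of Work they were given; with caller registration, the code that mutates the object makes that call instead.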
Level 3:
At this stage of complexity the only viable implementation is to use a Unit of Work. It would also be responsible for initiating and committing the SQL transactions (if you are using an SQL database), with the appropriate rollback clauses.
P.S.
Read the "Patterns of Enterprise Application Architecture" book. It's what you desperately need. It also would correct the misconception about MVC and MVC-inspired design patters, that you have acquired by using Rails-like frameworks.

Symfony2: Difference between the "Embedded Form" and "Data Transformer" methodologies

Consider the following scenario
We have a simple database that involves two entities: user and category.
For our hypothesis, let's say that a user can have only one type of category, and a category can be associated with n users.
Now, consider a web page where a user - say, one with ROLE_ADMINISTRATOR - can edit the user table and associate each user with a different category.
As far as I know (and I'm still new to Symfony in general), if I use Doctrine and Symfony2 in tandem with, let's say, the annotation method, I'll have two entities (PHP classes).
Embedded form
Suppose that, to show the user and (of course!) also show and persist his category, I choose to follow the "embedded form" strategy.
Given that the entities have been created, I'll have to create a form for Category (suppose that in its form builder I add only the id attribute of the category).
After that, I have to add the previous form to the form builder of the UserType class and, with "some kind of magic", the form will render (after the appropriate operations); just as magically, when I post it back (and bind it, and so on), all the information will be persisted to the database.
Data Transformers
A.K.A. transform a form's input into an object and vice versa.
This way I'll have to define a, let's say, CategorySelectorType that, in its builder, will add a class (a service?) that does those transformations.
Then we define the data transformer itself, which implements DataTransformerInterface (with its methods and so on).
The next step will be to register that class as a service and to add to UserType the field that will use this service.
So I don't see any "strong" difference between these two methodologies except the "reusability" of the service. Can somebody offer me a different point of view and explain the differences, if any?
A data transformer does not replace an embedded form; rather, it enhances forms and wraps data transformation nicely.
The first sentence on the cookbook page about Data Transformers sums it up nicely:
You'll often find the need to transform the data the user entered in a
form into something else for use in your program.
In your example above, you could add a drop-down list of categories so the admin can select one for the given user. This would be done using an embedded form. As the category field is the id of an existing category, there is no need to transform the data.
For some reason you now want the admin to be able to enter the category as free text. Now you need to do some transformation of the text into the object in question. Maybe you want him to be able to either add a new category or select an existing one with this text field. Both are possible using a data transformer which takes the text and searches for the category. Depending on your needs, categories that do not exist yet can be created and returned.
Another use case is when a user enters data which needs to be modified in some way before storing them. Let's say the user enters a street, house number and city but you want to store the coordinates instead.
In both cases it doesn't matter if you embed the form into another!
Could you do that in your controller? Of course. Is it a good idea to do such things in the controller? Probably not, as you would have a hard time testing or reusing it (while doing it in a transformer lets you unit test the transformation nicely).
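The shape of the idea, sketched outside Symfony's PHP API (Python; Category, the repository and all method names are stand-ins for the Doctrine machinery). A transformer is just a matched pair: model value to field value, and field value back to model value:

```python
class Category:
    def __init__(self, name):
        self.name = name


class InMemoryCategoryRepository:
    """Stand-in for a Doctrine repository."""

    def __init__(self):
        self.by_name = {}

    def find_by_name(self, name):
        return self.by_name.get(name)

    def create(self, name):
        category = Category(name)
        self.by_name[name] = category
        return category


class CategoryTransformer:
    """Turns a Category into display text, and user-entered text back into
    a Category (creating one if it does not exist yet)."""

    def __init__(self, repository):
        self.repository = repository

    def transform(self, category):
        # model data -> form field value
        return "" if category is None else category.name

    def reverse_transform(self, text):
        # form field value -> model data
        text = text.strip()
        if not text:
            return None
        category = self.repository.find_by_name(text)
        if category is None:
            category = self.repository.create(text)  # create a missing category
        return category
```

Note how the transformer, not the controller, carries the "find or create" logic, which is exactly what makes it easy to unit test and reuse.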
UPDATE
Of course it is possible to place the transformation code someplace else. The user object itself is not a good place, as the model should not know about the entity manager, which is needed to do the transformation.
The user type would be possible, but this means that it gets tied to the entity manager.
It all adds up to the very powerful concept of separation of concerns, which states that one class should do only one thing, so as to be maintainable, reusable, testable and so on. If you follow this concept, then it should be clear that data transformation is a thing of its own and should be treated as such. If you don't care, you may not need the transformation functionality.

Is Core Data implementing the data mapper pattern?

I know that Core Data should not be considered an ORM, but it still offers functionality that is similar to an ORM. Just curious: is it implementing the data mapper pattern? I know that "the Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other" (Martin Fowler). IMHO the context handles all the SQL stuff in one transaction, so it's a very performance-wise design, and IMHO Core Data might be considered to implement the data mapper pattern.
One year later, I will contribute my two cents.
I am not an ORM expert and only recently started something using a data mapper, but as a long-time Core Data user I can say: no. The main objective of this pattern is a clean separation of a domain object from all database-related operations.
Once I start writing unit tests, the first thing I notice is that I must load a database; even if it is just some in-memory store, I must load one. Also, there are no mappers for each class; I have no control over how each relation is stored.
Core Data loads lots of meta information about your object graph and forces some structure onto it. Although you can change the persistent store and bake something of your own, you will have lots of restrictions on how to do it, with a clear "relational" feeling to it.
The idea is good, we might say it is some variation of it. Something that I do love is that the save operation is done by the context, not the object itself. So there is some type of separation.
However, look at functions like awakeFromFetch or didSave: both operations are related to the data store, not to a plain domain object. A proper data mapper pattern would let you define those operations for each persistent store, not unify them in a single object.
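Sketched (Python; illustrative names), the difference is that each store's mapper owns its own lifecycle hooks, so the domain object stays plain:

```python
class Card:
    """Plain domain object: no awakeFromFetch/didSave-style hooks on it."""

    def __init__(self, title):
        self.title = title


class SqlCardMapper:
    def fetch(self, row):
        card = Card(title=row["title"])
        self.awake_from_fetch(card)  # the store-specific hook lives here
        return card

    def awake_from_fetch(self, card):
        pass  # e.g. decode SQL-specific representations


class JsonFileCardMapper:
    def fetch(self, record):
        card = Card(title=record["title"])
        self.awake_from_fetch(card)  # a different store, a different hook
        return card

    def awake_from_fetch(self, card):
        pass  # e.g. normalize legacy JSON fields
```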
UPDATE:
Funnily enough, one day after writing my answer I had to deal with an old Core Data based project, and I came back to improve this answer. To make things clear: I consider that "seems like the pattern" is not enough. For example, the implementations of the facade and adapter patterns are quite similar, but you name them differently depending on how you use them.
Is Core Data implementing data mapper?
I must say that my "not quite" should have been "definitely not!"
I have just been very angry because I needed to rename some fields and later add new ones. Although I know quite well how auto-migrations work with Core Data, I forgot how annoying they are.
How many times do you need a new field, rename something, or experiment until you get it right... and every single tiny change requires a full-blown database migration? With data mappers this never happens, because the domain objects are perfectly decoupled; you only touch the database, to catch it up with the domain objects, after you finish a new feature. Core Data forces you to bind every single detail of your domain objects at every single moment.
Boy, how sweet life was until I forgot that "tiny" annoyance of Core Data being the exact opposite of what you can achieve with data mappers.

Entity Framework and Encapsulation

I would like to experimentally apply an aspect of encapsulation that I read about once, where an entity object includes the domains of its attributes; e.g. for its CostCentre property, it contains the list of valid cost centres. This way, when I open an edit form for an Extension, I need only pass the form one Extension object, whereas I would normally also access a CostCentre object when initialising the form.
This also applies where I have a list of Extensions bound to a grid (Telerik RadGrid) and I handle an edit command on the grid. I want to create an edit form and pass it an Extension object, whereas now I pass the edit form an ExtensionID and create my object inside the form.
What I'm actually asking for here are pointers to guidance on doing things this way, or on the 'proper' way of achieving something similar to what I have described.
It would depend on your data source. If you are retrieving the list of Cost Centers from a database, that would be one approach. If it's a short list of predetermined values (like Yes/No/Maybe So) then property attributes might do the trick. If it needs to be more configurable per-environment, then IoC or the Provider pattern would be the best choice.
I think your problem is similar to a custom ad-hoc search page we did on a previous project. We decorated our entity classes and properties with attributes that contained some predetermined 'pointers' to the lookup value methods, and their relationships. Then we created a single custom UI control (like your edit page described in your post) which used these attributes to generate the drop down and auto-completion text box lists by dynamically generating a LINQ expression, then executing it at run-time based on whatever the user was doing.
This was accomplished with basically three moving parts: A) the attributes on the data access objects; B) the 'attribute facade' methods in the middle tier, compiling and generating dynamic LINQ expressions; and C) the custom UI control that called our middle-tier service methods.
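An analogous sketch in Python, with class-level metadata standing in for the .NET attributes; every name here is hypothetical:

```python
# Field-level metadata, standing in for .NET property attributes.
class Field:
    def __init__(self, lookup=None):
        self.lookup = lookup  # name of the method that yields valid values


class Extension:
    """Entity whose cost_centre field declares its own domain of valid values."""

    fields = {
        "cost_centre": Field(lookup="valid_cost_centres"),
        "number": Field(),
    }

    @staticmethod
    def valid_cost_centres():
        # In the real system this would query the data tier.
        return ["CC100", "CC200", "CC300"]


def build_dropdowns(entity_cls):
    """The 'attribute facade': walk the metadata, build each field's choices."""
    choices = {}
    for name, field in entity_cls.fields.items():
        if field.lookup:
            choices[name] = getattr(entity_cls, field.lookup)()
    return choices


print(build_dropdowns(Extension))  # {'cost_centre': ['CC100', 'CC200', 'CC300']}
```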
Sometimes plans like these backfire, but in our case it worked great. Decorating our objects with attributes and then creating a single path of logic gave us just enough power to do what we needed while minimizing the amount of code required, and it completely eliminated any boilerplate. However, this approach was not very configurable. By compiling these attributes into the code, we tightly coupled our application to the datasource. On this particular project it wasn't a big deal, because it was a client's internal system and it fit the project timeline. However, on a "real product", implementing the logic with the Provider pattern or using something like the Castle Project's IoC container would have given us the same power with a great deal more configurability. The downside is that there is more to manage, and more that can go wrong with deployments, etc.