So I have a well-defined schema, and the data storage that backs it (MongoDB) will allow for this kind of request.
Let's say I have a User class:
class User
emailAddress
name
I'm merging in data from another source (let's say a map of params, and I can properly identify the source). My intention is to put the unused properties into a structure within the User class.
For example: if I'm importing a User from Facebook, it will have all kinds of properties beyond just the emailAddress or the name, but I don't know how to deal with those yet.
My question is: how would I design the domain class so that it can handle all of this on creation of the object? (I'm willing to put a tracer property in to signify the source, i.e. adding [source: Facebook].)
The class would look, and be serialized, as follows. The info coming back from Facebook would be [name: Jim, email: bo@jim.com, friends: 1000, level: 42], and the resulting User would be:
class User
emailAddress: bo@jim.com
name: Jim
extraProperties: [Facebook, [friends:1000, level:42]]
What is the best way of going about this? Would it break the domain class model? Is expando something that would work here?
I think the best way to design your domain class would be to save the user's additional properties (extraProperties) as a serialised 'document' type object. If you convert the sample Map you have into, say, JSON/GSON or XML (via Converters) and save it to your database as a document / large nvarchar, you then have the flexibility of different properties for each user source.
You could then add a custom getter and setter to your domain object which convert / slurp the document and present it as a map to your controllers/services:
import grails.converters.JSON
import groovy.json.JsonSlurper

// stored in the database as a JSON document / large varchar
String extraProperties

def setExtraProperties(def properties) {
    this.extraProperties = (properties as JSON)?.toString()
}

def getExtraPropertiesMap() {
    def jsonSlurper = new JsonSlurper()
    def extraProps = jsonSlurper.parseText(this.extraProperties)
    return extraProps // you can then access this using map syntax, e.g. extraProps.Facebook.friends
}
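For the Facebook example above, usage would then look roughly like this (a sketch only; it assumes a Grails domain class, so save() is GORM, and that the imported map has already been keyed by its source):
def user = new User(emailAddress: 'bo@jim.com', name: 'Jim')
user.extraProperties = [Facebook: [friends: 1000, level: 42]] // stored as a JSON string by the setter
user.save()
assert user.extraPropertiesMap.Facebook.friends == 1000 // read back through the slurping getter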
While trying to assign an object to a view in my controller action I get the following message because this object is not persisted:
Could not serialize Domain Object Vendor\Extension\Domain\Model\Object. It is neither an Entity with identity properties set, nor a Value Object.
Is there any way to add this object to the view without creating a database entry?
The exception InvalidArgumentValueException('Could not serialize Domain Object $className. It is neither an Entity with identity properties set, nor a Value Object.', 1260881688) is thrown in the UriBuilder, i.e. when a model is used as an argument for creating a link.
The instance of Vendor\Extension\Domain\Model\Object must fulfil one of these requirements:
it can be represented as an array (it is an array or implements the Iterator interface), OR
it extends TYPO3\CMS\Extbase\DomainObject\AbstractDomainObject AND one of the following holds:
it extends TYPO3\CMS\Extbase\DomainObject\AbstractValueObject, OR
it has a valid uid (not null)
Thus, if you instantiate the object directly in the controller, the uid property is not defined yet. This property is only assigned when domain objects are fetched from or added to a repository.
TypeConverters
TypeConverters allow you to convert a given identifier (some string representation, hash value, ...) into a proper domain object. The following links show how to do that for the concept of an IBAN (International Bank Account Number).
IBAN model
TypeConverter to create object from string
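For orientation, the rough shape of such a converter is sketched below (the namespaces and the Iban constructor are assumptions based on the linked example, not verified code):
<?php
namespace H4ck3r31\BankAccountExample\Domain\Property\TypeConverter;

use H4ck3r31\BankAccountExample\Domain\Model\Iban;
use TYPO3\CMS\Extbase\Property\PropertyMappingConfigurationInterface;
use TYPO3\CMS\Extbase\Property\TypeConverter\AbstractTypeConverter;

class IbanTypeConverter extends AbstractTypeConverter
{
    protected $sourceTypes = ['string'];
    protected $targetType = Iban::class;
    protected $priority = 10;

    // rebuild the domain object from its string representation
    public function convertFrom($source, $targetType, array $convertedChildProperties = [], PropertyMappingConfigurationInterface $configuration = null)
    {
        return new Iban($source);
    }
}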
TypeConverters have to be registered in ext_localconf.php like this:
\TYPO3\CMS\Extbase\Utility\ExtensionUtility::registerTypeConverter(
\H4ck3r31\BankAccountExample\Domain\Property\TypeConverter\IbanTypeConverter::class
);
The Iban object can then be used in your controller:
public function someAction(Iban $iban) { ... }
Use array representation of your object
Another alternative could be to assign an array representation of the domain object to the view and use that to fill the link arguments:
$this->view->assign('myObject', $object->toArray());
When the controller action is invoked, the object is reconstituted from the submitted array keys, which are used as properties. Thus the array keys and properties must have the same naming, or a persistence column mapping must be defined.
public function someAction(MyObject $object) { ... }
In my previously mentioned bank account example it looks like this:
BankDto model
Controller action invocation
The term "Dto" is the abbreviation for "Data Transfer Object", thus it's not a real domain entity, does not have a proper UID and is just used to encapsulate information in a domain object when passing that to different components.
I want to build role functionality for my application, and I thought Scala's object would come in handy because I need a singleton for each of the different roles.
Therefore, I have the following code:
import java.util.UUID

trait Role {
  def id: UUID
  def name: String
}
object Admin extends Role {
  val id = UUID.randomUUID()
  val name = "admin"
}

object Pro extends Role {
  val id = UUID.randomUUID()
  val name = "pro"
}
However, after I persisted these roles in my database and restarted the application, I noticed that the ids of the roles changed, meaning they are no longer the same roles that I persisted when I first started the application. So if a role with the same name has already been stored in the database, I would need to read its id and set it on the singleton object. I thought I could use parameters to initialize the Admin and Pro objects, but apparently this does not work.
How can this be done?
First, it is difficult to discuss the problem from this code alone, without knowing how you do the database persistence part.
In your code the id is initialised by calling randomUUID, so of course you get a new one with each start; the system works as designed.
Second, I am not sure we would agree on what a singleton is and what the semantics of the two objects are.
To me it looks as if you actually want two different instances of the type Role, rather than one singleton type Admin and a different singleton type Pro, because the two differ only in their attribute values, not in structure.
A singleton object is already an object, indeed the sole object of its type. So the notion of setting its values from outside during some sort of construction, like you would do with classes during instantiation, is not really applicable here.
Take a look at the below code:
object Admin extends Role {
  // name must be initialised before id, since id is derived from it
  val name = "admin"
  val id = getPersistedIDFromDatabase(name).getOrElse {
    persistID(name, UUID.randomUUID())
  }
}
// getPersistedIDFromDatabase => executes the `select` SQL query and returns an Option[UUID],
// i.e. Some(id) if the admin already exists, otherwise None.
// persistID => stores the freshly generated id and returns it.
Whenever you restart your application, its memory is wiped out. So it has no way to know about your previous ID.
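To make that concrete, here is a minimal, self-contained sketch of the same idea; RoleStore stands in for whatever persistence layer you actually use (it is not part of your code), and the lazy val avoids initialization-order surprises inside the objects:
import java.util.UUID
import scala.collection.concurrent.TrieMap

// stand-in for the database: maps a role name to its persisted id
object RoleStore {
  private val store = TrieMap.empty[String, UUID]
  def findId(name: String): Option[UUID] = store.get(name)
  def persistId(name: String, id: UUID): UUID = { store.putIfAbsent(name, id); store(name) }
}

trait Role {
  def name: String
  // resolved once, on first access: reuse the persisted id, or generate and persist a new one
  lazy val id: UUID = RoleStore.findId(name).getOrElse(RoleStore.persistId(name, UUID.randomUUID()))
}

object Admin extends Role { val name = "admin" }
object Pro extends Role { val name = "pro" }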
The title may seem confusing, but it's not easy to describe the question in a few words. Let me explain the situation:
We have a web application project and a calculation engine project. The web application collects user input, uses the engine to generate a result, and presents it to the user. User input, engine output, and other data are all persisted to the DB using JPA.
The engine input and output consist of objects in a tree structure, for example:
class InputA {
    String attrA1;
    List<InputB> inputBs;
}

class InputB {
    String attrB1;
    List<InputC> inputCs;
}

class InputC {
    String attrC1;
}
The engine output is in a similar style.
The web application project handles the data persistence using JPA. We need to persist the engine input and output, as well as some other data related to the input and output. Such data can be seen as extra fields on certain classes. For example:
We want to persist extra fields, so it looks like:
class InputBx extends InputB {
    String attrBx1;
}

class InputCx extends InputC {
    String attrCx1;
}
In the Java OO world this works: we can store a list of InputBx in InputA, and a list of InputCx in InputBx, because of inheritance.
But we run into trouble when using JPA to persist the extended objects.
First of all, it requires the engine project to turn its classes into JPA entities. The engine was working fine by itself: it accepts correct input and generates correct output. It doesn't smell good to force its model to become JPA entities just because another project tries to persist that model.
Second, JPA doesn't accept the inherited objects when using InputA as the entry point. From JPA's point of view, it only knows that InputA contains a list of InputB; it is not possible to persist/retrieve a list of InputBx in an InputA object.
While trying to solve this, we came up with two ideas, but neither satisfies us:
idea 1:
Use composition instead of inheritance, so we still persist the original InputA and its tree structure, including InputB and InputC:
class InputBx {
    String attrBx1;
    InputB inputB;
}

class InputCx {
    String attrCx1;
    InputC inputC;
}
So the original input object tree can be smoothly retrieved, and the InputBx and InputCx objects need to be retrieved using the InputB and InputC objects in the tree as references.
The good thing is that no matter what changes are made to the structure of the original input class tree (such as renaming attributes or adding/removing attributes in the classes), the extended classes InputBx and InputCx and their attributes stay automatically synchronized.
The drawback is that this structure increases the number of calls to the database, and the model is not easy to use in the application (both back end and front end). Whenever we want the related information of an InputB or InputC, we have to write code to look up the corresponding InputBx or InputCx object.
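For illustration, such a lookup ends up roughly like this (a sketch; the JPQL and field names are assumptions):
// called whenever the web layer needs the extra data attached to an InputB
// (entityManager is a javax.persistence.EntityManager)
InputBx findExtensionFor(EntityManager entityManager, InputB inputB) {
    return entityManager
        .createQuery("select x from InputBx x where x.inputB = :b", InputBx.class)
        .setParameter("b", inputB)
        .getSingleResult();
}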
idea 2:
Manually create mirror classes that form a structure similar to the original input classes. So we created:
class InputAx {
    String attrA1;
    List<InputBx> inputBs;
}

class InputBx {
    String attrB1;
    List<InputCx> inputCs;
    String attrBx1;
}

class InputCx {
    String attrC1;
    String attrCx1;
}
We could use this as the model of the web application, and as the JPA entities as well. Here's what we would get:
The engine project is set free; it doesn't need to be bound to how other projects persist these input/output objects. The engine project is now independent.
The JPA persistence works fluently, and no extra calls to the database are required.
The back end and front end UI just use this model and get both the original input objects and the related information with no effort. When we want the engine to perform a calculation, we can use a mapping mechanism to transfer between the original objects and the extended objects (a rough sketch of this mapping follows the drawbacks below).
The drawbacks are also obvious:
There is duplication in the class structure, which is not desirable from an OO point of view.
If you consider it a DTO introduced to reduce database calls, using a DTO for purely local transfer can be called an anti-pattern.
The structure is not automatically synchronized with the original model, so if any changes are made to the original model we need to manually update this model as well. If a developer forgets to do this, there will be defects that are not easy to find.
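The mapping mechanism mentioned under idea 2 would look roughly like this (a hand-written sketch; in practice a mapping library such as MapStruct or Dozer could generate most of it):
import java.util.ArrayList;

// copies the web/JPA model (InputAx/InputBx/InputCx) back into the engine's own input classes
class EngineInputMapper {

    InputA toEngineInput(InputAx source) {
        InputA target = new InputA();
        target.attrA1 = source.attrA1;
        target.inputBs = new ArrayList<>();
        for (InputBx bx : source.inputBs) {
            target.inputBs.add(toEngineInput(bx));
        }
        return target;
    }

    InputB toEngineInput(InputBx source) {
        InputB target = new InputB();
        target.attrB1 = source.attrB1;
        target.inputCs = new ArrayList<>();
        for (InputCx cx : source.inputCs) {
            InputC c = new InputC();
            c.attrC1 = cx.attrC1;
            target.inputCs.add(c);
        }
        // attrBx1 / attrCx1 stay in the web model; the engine never sees them
        return target;
    }
}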
I'm looking for the following help:
Are there any existing good/best practices or patterns for solving a situation like the one we describe? Or any anti-patterns we should try to avoid? References to web articles are welcome.
If possible, can you also comment on idea 1 and idea 2 from the perspective of OO design, persistence practices, your own experience, etc.
I will be grateful for your help.
I am trying to implement a distributed cache with spring-memcached. The docs suggest that to use an object as the key I need to have a method in my domain class with the @CacheKeyMethod annotation on it.
But the problem is that I am using the same domain class in different scenarios, and the key to be generated in each case follows different logic. For example, for a User class, one scenario requires the key to be unique in terms of city and gender, but the other case requires it to be unique in terms of the user's email; the key is essentially whatever your lookup is based on.
A user's email would determine the city and gender, so I could use email as the key in the first case as well, but that would mean separate cache entries for each user even though the cached data would be the same as long as the gender and city are the same. Keying on city and gender is expected to increase the hit ratio by a huge margin (just think how many users you can expect to be males from Bangalore).
Is there a way I could define different keys? Also, it would be great if the logic for generating the key could be externalised from the domain class itself.
I read the docs and figured out that something called CacheKeyBuilder and/or CacheKeyBuilderImpl could do the trick but I couldn't understand how to proceed.
Edit:
OK, I got one clue! What CacheKeyBuilderImpl does is call the generateKey method on a DefaultKeyProvider instance, which looks for the @CacheKeyMethod annotation on the provided domain class's methods and executes the method it finds to obtain the key.
So replacing CacheKeyBuilderImpl with a custom implementation, or replacing the KeyProvider's default implementation inside CacheKeyBuilderImpl with your own, might do the trick... but the KeyProvider reference is hardwired to DefaultKeyProvider.
Can anybody help me implement CacheKeyBuilder (with respect to what the different methods do; the documentation doesn't clarify it), and also show how to inject it so it is used instead of the usual CacheKeyBuilderImpl?
Simple Spring Memcached (SSM) hasn't been designed to allow such low-level customization. As you wrote, one way is to replace CacheKeyBuilderImpl with your own implementation. The default implementation is hardwired, but it can easily be replaced using a custom simplesm-context.xml configuration.
As I understand your question, you want to cache your User objects under different keys depending on the use case. That's supported out of the box, because by default SSM uses the method arguments to generate the cache key, not the result.
Example:
@ReadThroughMultiCache(namespace = "userslist.cityandgender", expiration = 3600)
public List<User> getByCityAndGender(@ParameterValueKeyProvider(order = 0) String city, @ParameterValueKeyProvider(order = 1) String gender) {
    // implementation
}

@ReadThroughSingleCache(namespace = "users", expiration = 3600)
public User getByEmail(@ParameterValueKeyProvider String email) {
    // implementation
}
In general, @CacheKeyMethod is only used to generate the cache key if the object that contains the method is passed as a parameter to the cached method and that parameter is annotated with @ParameterValueKeyProvider.
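For example, something like this would make the annotation kick in (a sketch; the method names are illustrative):
public class User {
    private String email;

    // used as the cache key only when a User instance itself is the key parameter
    @CacheKeyMethod
    public String cacheKey() {
        return email;
    }
}

@ReadThroughSingleCache(namespace = "users", expiration = 3600)
public User refresh(@ParameterValueKeyProvider User user) {
    // implementation
}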
I'm creating a simple ORM in Zend Framework, to roughly encapsulate a public library application, using the DbTable/Mapper/Model approach. I'm not sure if the way I'm doing my User-related classes is right, though, as I have some logic in Mapper_User, and some in Model_User.
Mapper_User
<?php
class Mapper_Users {
    /*
    createModelObject would be called by a Controller handling a Form_Register's
    data, to create a new Model_User object. This object would then be saved by
    the same Controller by calling Mapper_Users->save();
    */
    public function createModelObject(array $fields) {
        if(!isset($fields['date_registered']))
            $fields['date_registered'] = date('Y-m-d H:i:s');
        if(!isset($fields['max_concurrent_rentals']))
            $fields['max_concurrent_rentals'] = 3;
        return new Model_User($fields);
    }
}
?>
In the method which creates new Model_User objects from scratch (as in, not pulling a record from the DB, but registering a new user), I instantiate a new Model_User with the name/username/password provided from a Form, then set a few object properties such as the registration date, "max books allowed at one time" and such. This data, being stuffed inside the Model_User by the Mapper_User, then gets written to the DB when Mapper_User->save(); gets called. The Mapper feels like the right place for this to go - keeps the Model light.
Is this right, or should default fields like this be set inside Model_User itself?
Model_User
<?php
class Model_User {
    public function setPassword($value) {
        $this->password = md5($value);
    }
}
?>
When setting a user object's password, I'm doing this in Model_User->setPassword($value);, as you might expect, and doing $this->password = md5($value); inside this method. Again, this feels right - trying to do the md5 step in Mapper_User->save(); method would cause issues if the Model_User were one pulled from the DB, as the password field would clearly already be hashed.
And this is where my confusion arises. To my mind, all the logic pertaining to "fields to do with a user" should either live in its Model, or its Mapper, but here I have some logic (default fields) in the Mapper, and some (field operations) in the Model. Is this right, or should I be trying to somehow get default fields in the Model, or field operations in the Mapper?
Thanks for taking the time to read this!
Edit for #RockyFord:
Mapper_User actually extends an Abstract I've written, as I don't like writing the same basic code in 500 Mapper_*.php files, so there's some bureaucracy due to that, but its effective __construct() is pretty simple:
<?php
class Mapper_Users {
    public function __construct() {
        $this->_db = new DbTable_Users();
        if(!$this->_db instanceof Zend_Db_Table_Abstract)
            throw new Exception('Invalid table data gateway provided');
    }
}
?>
The DataMapper is responsible for populating the object with its data, as well as persisting it. It seems like you're mixing things when you call $user->save() because you're putting persistence logic within your domain object. This is a common approach when you're using the ActiveRecord pattern instead of DataMappers, which is a bad thing.
Your DataMapper should be responsible for saving the object ($mapper->save($user);) and it should update just the changed properties. That way, the password is updated only if you set a new hash.
UPDATE:
You said:
[...] trying to do the md5 step in Mapper_User->save(); method would cause
issues if the Model_User were one pulled from the DB, as the password
field would clearly already be hashed.
Create a method called setPasswordHash() and use it when pulling from the database.
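A minimal sketch of that idea:
<?php
class Model_User {
    protected $password;

    // plain-text password coming from a form
    public function setPassword($value) {
        $this->password = md5($value);
    }

    // already-hashed value coming from the database row
    public function setPasswordHash($hash) {
        $this->password = $hash;
    }
}
?>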
Remember: Don't look for things!
Instead of looking for the database inside your mappers, you should ask for it.
public function __construct(Zend_Db_Table $dbTable) {
    $this->dbTable = $dbTable;
}
It's all about Dependency Injection.
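The wiring then happens outside the mapper, along these lines (a sketch):
<?php
// the table gateway is created elsewhere (bootstrap, container, factory) and handed in
$mapper = new Mapper_Users(new DbTable_Users());
$mapper->save($user);
?>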
This may take a while to answer completely, but I'll start with the setPassword question.
Your current code:
public function setPassword($value) {
    $this->password = md5($value);
}
Now, this has nothing to do with convention or best practice, but with practicality.
Ask yourself:
What happens when you retrieve a database record for your user object and that database record contains a hashed password?
Answer: When you construct the user object and call $this->setPassword($password); or equivalent, you will be applying the hash to a hash.
So you are almost obligated to hash the password in the mapper's save() method or the method used to update the password. Think of the hash value in the database table as the password and the value that's typed into the form field as a placeholder for that password.
Next Part:
To my mind, all the logic pertaining to "fields to do with a user" should either live in its Model, or its Mapper
This is mostly correct.
Everything that belongs to the object domain (Model_User) shall be addressed in the domain Model class (Model_User).
Mappers are only to translate (map) a data object (database row, json string, xml file, flat file, csv file ...) to a form that can instantiate a domain object (Model_User).
So you may end up with more than one mapper available for a given domain object, or one mapper may map to more than one source of data.
It might help if you stopped thinking of your data as "fields", which tends to keep your head in the database, and instead thought of your objects in terms of properties or characteristics.
Because when you get down to the most basic level a Model_User object is just:
class Model_User {
    protected $id;
    protected $name;
    protected $password;
    //continue....
}
All of the getters, setters, constructors, and other methods exist pretty much so that we can put values into those variables.