How to validate multiple Typo3 action parameters in combination? - typo3

The Typo3 documentation describes @Validate annotations which can be used to validate parameters of a single controller action:
https://docs.typo3.org/m/typo3/book-extbasefluid/master/en-us/9-CrosscuttingConcerns/2-validating-domain-objects.html
However, it only describes how to add a custom validator for a single parameter. It is possible to add multiple validation annotations, but each of them can again validate only a single parameter, not multiple parameters in combination.
First question: Is it possible to add a validator which checks multiple parameters, or even all parameters, of a specific controller action?
Of course, the obvious workaround is to combine multiple arguments into a single argument, e.g. an array or an object. But this is especially annoying if the arguments themselves are already (independent) model objects.
Second question: If the answer to the first question is "it's impossible", what is the recommended way to combine the arguments of a controller action?
(E.g.: Should one use an array? That does not seem to be preferred in Typo3, due to the lack of type safety and other features. Should one create a class? But which kind of class would that be? A utility class? A model class? But persistence would then have to be suppressed for that model class. This all seems messy.)
I'm using version 9.5 of Typo3, but if things are different in version 10, that would be interesting as well.

To the best of my knowledge, the best approach here is to use a data transfer object (DTO).
If your models have to be validated in combination, but do not belong to any other entity, combining them in a DTO is probably the best way to go: the validation logic is then clustered in a single validator.
See also this blog post about DTOs: https://usetypo3.com/dtos-in-extbase.html
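To make that concrete, here is a minimal sketch of the DTO approach for TYPO3 9.5. All class names, the Vendor\Example namespace and the date-range use case are made up for illustration; the validator is bound to the action argument with the @Extbase\Validate annotation (TYPO3 9.5 still accepts the older @validate notation as well).

// Classes/Domain/Dto/DateRange.php: a plain PHP class, not an AbstractEntity,
// so Extbase will not try to persist it.
namespace Vendor\Example\Domain\Dto;

class DateRange
{
    /** @var string ISO date, e.g. "2020-01-31" */
    protected $start = '';

    /** @var string ISO date */
    protected $end = '';

    public function getStart(): string { return $this->start; }
    public function setStart(string $start): void { $this->start = $start; }
    public function getEnd(): string { return $this->end; }
    public function setEnd(string $end): void { $this->end = $end; }
}

// Classes/Domain/Validator/DateRangeValidator.php: one validator that receives
// the whole DTO and can therefore check the fields in combination.
namespace Vendor\Example\Domain\Validator;

use TYPO3\CMS\Extbase\Validation\Validator\AbstractValidator;
use Vendor\Example\Domain\Dto\DateRange;

class DateRangeValidator extends AbstractValidator
{
    protected function isValid($value)
    {
        if (!$value instanceof DateRange) {
            $this->addError('Expected a DateRange object.', 1580000001);
            return;
        }
        // Combined rule: the two fields are validated against each other.
        if ($value->getEnd() < $value->getStart()) {
            $this->addError('The end date must not be before the start date.', 1580000002);
        }
    }
}

// In the controller, the validator is attached to the action argument:
use TYPO3\CMS\Extbase\Annotation as Extbase;

/**
 * @Extbase\Validate(param="dateRange", validator="Vendor\Example\Domain\Validator\DateRangeValidator")
 */
public function createAction(\Vendor\Example\Domain\Dto\DateRange $dateRange)
{
    // Only reached when the validator added no errors; otherwise Extbase
    // forwards back to the referring action with the validation results.
}

In the Fluid form you would then bind the fields to the DTO's properties (e.g. via <f:form object="{dateRange}"> and <f:form.textfield property="start" />) so that the property mapper can build the object before validation runs.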

Related

Symfony2: where to put "pre-set" and "post-get" entity methods?

I have three different entity attributes which have to be "pre-parsed" before they get saved in the database.
The same attributes have to be "post-parsed" before being shown to users.
There are several different controller actions which set/get these attributes. Currently I pre-parse/post-parse these attributes in basically every one of these methods.
How should I handle this? I was thinking about putting it directly into the entity, but that is not the place for it, especially because I need the same pre-parse functions in a few entities.
Basically, these functions have to run before every setter and getter call.
If you have a T4 template that generates the model code, then it's relatively easy to change the property setters/getters to do the data pre- and post-processing.
You may want to look at Data Transformers - http://symfony.com/doc/current/cookbook/form/data_transformers.html
UPDATE:
Another, and probably the most appropriate, method would be to use a Doctrine EventListener or EventSubscriber.
http://symfony.com/doc/current/cookbook/doctrine/event_listeners_subscribers.html
http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/events.html
In your case, you need to listen/subscribe to prePersist, preUpdate, and postLoad events.
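A rough sketch of the EventSubscriber variant is below. The entity class, field name, parsing logic and service id are placeholders, not taken from your code base.

// src/Acme/DemoBundle/EventListener/ParseSubscriber.php (hypothetical names)
namespace Acme\DemoBundle\EventListener;

use Acme\DemoBundle\Entity\MyEntity;
use Doctrine\Common\EventSubscriber;
use Doctrine\ORM\Events;
use Doctrine\ORM\Event\LifecycleEventArgs;
use Doctrine\ORM\Event\PreUpdateEventArgs;

class ParseSubscriber implements EventSubscriber
{
    public function getSubscribedEvents()
    {
        return array(Events::prePersist, Events::preUpdate, Events::postLoad);
    }

    public function prePersist(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();
        if ($entity instanceof MyEntity) {
            $entity->setTitle($this->preParse($entity->getTitle()));
        }
    }

    public function preUpdate(PreUpdateEventArgs $args)
    {
        // In preUpdate the changeset is already computed, so modify the value
        // through the event args instead of calling the setter.
        if ($args->getEntity() instanceof MyEntity && $args->hasChangedField('title')) {
            $args->setNewValue('title', $this->preParse($args->getNewValue('title')));
        }
    }

    public function postLoad(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();
        if ($entity instanceof MyEntity) {
            $entity->setTitle($this->postParse($entity->getTitle()));
        }
    }

    private function preParse($value)  { return trim($value); }    // placeholder pre-parse
    private function postParse($value) { return ucfirst($value); } // placeholder post-parse
}

# app/config/services.yml: register the subscriber with Doctrine
services:
    acme_demo.parse_subscriber:
        class: Acme\DemoBundle\EventListener\ParseSubscriber
        tags:
            - { name: doctrine.event_subscriber }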

ZF models correct use

I am struggling to understand the correct usage of models. Currently I inherit from Db_Table directly and declare all the business logic there. I know that's not the correct way to do it.
One solution would be to use the Doctrine ORM, but that involves a learning curve, and the components I currently use (paginator and auth) would need to be rewritten. Doctrine 1 also adds another dozen classes that need to be loaded.
The cleanest implementation I have seen so far is to use Data Mapper classes between the so-called model and the Db_Table. I haven't implemented this yet, as it seems to head towards writing another ORM. An example could be something like this, for an SQL table User:
a class with setters, getters and business logic: /model/User.php
a data mapper: /model/mapper/UserMapper.php; basically all the update and save actions go in here
the data source: /model/DbTable/User.php, which extends Db_Table_Abstract
The problems are with the relationships between models.
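A minimal sketch of that structure, roughly following the ZF1 QuickStart naming conventions (paths, class names and the table/fields are just examples, adjust them to your /model/ layout):

// application/models/DbTable/User.php: thin table gateway
class Application_Model_DbTable_User extends Zend_Db_Table_Abstract
{
    protected $_name = 'users';
}

// application/models/User.php: plain domain object with no DB knowledge
class Application_Model_User
{
    protected $id;
    protected $email;

    public function setId($id) { $this->id = (int) $id; return $this; }
    public function getId() { return $this->id; }
    public function setEmail($email) { $this->email = $email; return $this; }
    public function getEmail() { return $this->email; }
}

// application/models/UserMapper.php: translates between the two
class Application_Model_UserMapper
{
    protected $dbTable;

    public function getDbTable()
    {
        if ($this->dbTable === null) {
            $this->dbTable = new Application_Model_DbTable_User();
        }
        return $this->dbTable;
    }

    public function save(Application_Model_User $user)
    {
        $data = array('email' => $user->getEmail());
        if ($user->getId() === null) {
            $user->setId($this->getDbTable()->insert($data));
        } else {
            $this->getDbTable()->update($data, array('id = ?' => $user->getId()));
        }
        return $user;
    }

    public function find($id, Application_Model_User $user)
    {
        $row = $this->getDbTable()->find($id)->current();
        if ($row) {
            $user->setId($row->id)->setEmail($row->email);
        }
        return $user;
    }
}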
I have found it beneficial to not have my models extend Db_Table, but to use composition instead. That means my model 'has a' Db_Table rather than 'is a' Db_Table.
That way I find it much easier to reference multiple tables in the same model, which is a common requirement. This is enough for a simple project. I am currently developing a more complex application and have used the Data Mapper pattern and have found that it has simplified my code more than I would have believed.
Specifically, I have created a class which provides all access to the database and exposes methods such as getUser() etc. That way, if the DB changes, or my client wants something daft like storing records in XML, or we split the servers or something, I only have to rewrite one class.
Again, my models do not extend this class, but have an instance of it assigned as a property during construction.
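In code, that composition looks roughly like this (the gateway and model class names are invented for the example):

// A gateway class that owns all database access.
class Application_Model_UserGateway
{
    private $table;

    public function __construct(Zend_Db_Table_Abstract $table)
    {
        // The gateway 'has a' table object; swap it out if storage changes.
        $this->table = $table;
    }

    public function getUser($id)
    {
        return $this->table->find($id)->current();
    }
}

// A model that composes the gateway instead of extending Db_Table.
class Application_Model_Account
{
    private $users;

    public function __construct(Application_Model_UserGateway $users)
    {
        // Composition: the model 'has a' gateway, it is not one.
        $this->users = $users;
    }
}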
I would say the 'correct' way depends on the situation. Following the YAGNI and KISS principles, it is not good to over-complicate your model setup unless you really believe that it will benefit you in the long run.
What is the application you are developing? How is your current setup of extending Db_Table holding you back?

ASP.Net MVC2 Validate two ViewModels of the same class differently using DataAnnotations

I'm using DataAnnotations for validation of a custom class (LINQ to SQL auto generated) using the MetadataType tag on top of the class. I'm loving DataAnnotations and it works well in simple, common scenarios. E.g.
[MetadataType(typeof(Person_Validation))]
public class Person
But what if you need two different sets of validation rules applied to the class in different scenarios?
My situation: some fields are mandatory on the public-facing www site, but not mandatory on the internal admin site. But both sites have a "Create New" view for the same object/class.
This is where DataAnnotations hell surfaces...
I've tried using two different ViewModels with different validation applied to each of them, and two classes that inherit from Person with different validation applied to each. But all roads seem to conflict with DRY principles, and somewhere along the line you end up having to totally respecify all the properties of the underlying class structure. You don't have to do this when you have just one validation rule set. So it very quickly becomes hell and impractical for complex objects.
Is this possible using DataAnnotations and what is the best DRY architecture?
Not sure what you mean by 'virtually duplicate and manually set each and every property manually in the original underlying class'. I've never liked the idea of buddy classes, and would personally recommend different view models for Admin and Public site (with appropriate validation set on each), and then mapping between the models using AutoMapper.
UPDATE:
Regarding AutoMapper, the basic usage is something like this:
First you have to define your mappings. This lets AutoMapper figure out in advance how to map objects. You only need to do this once in the application, so a good place to do it in an ASP.NET app is Application_Start() in Global.asax. For each pair of classes you want to map between, call: Mapper.CreateMap<SourceType, DestinationType>();
Then, in your application code, to perform the map you just use:
var destinationObject = Mapper.Map<SourceType, DestinationType>(sourceObject);

Can an API in SOAP/WSDL be kept backwards compatible easily?

When using an IPC library, it is important that client and server can still communicate even when their versions of the API differ. As I'm considering using SOAP for our client/server application, I wonder whether a SOAP/WSDL solution can deal with API changes well.
For example:
Adding parameters to existing functions
Adding variables to existing structs that are used in existing functions
Removing functions
Removing parameters from existing functions
Removing variables from existing structs that are used in existing functions
Changing the type of a parameter used in an existing function
Changing the order of parameters in an existing function
Changing the order of composite parts in an existing struct
Renaming existing functions
Renaming parameters
Note: by "struct" I mean a composite type
As far as I know, the SOAP/WSDL standard itself does not cover this. But tools exist to cope with such issues. For instance, in GlassFish you can specify an XSL stylesheet to transform the request/response of a web service. Other solutions such as the Oracle SOA Suite offer much more elaborate tools to manage the versioning of web services and the integration of components. Messages can be routed automatically to different versions of a web service and/or transformed. You will need to check what your target infrastructure offers.
EDIT:
XML and XSD are more flexible regarding schema evolution than types and serialization in object-oriented languages. Some changes can be made backward compatible simply by declaring elements as optional, e.g.:
Adding parameters to existing functions - if a parameter is optional, you get a null value if the client doesn't send it
Adding variables to existing structs that are used in existing functions - if the value is optional, you get null if the client doesn't provide it
Removing functions - no magic here
Removing parameters from existing functions - parameters sent by the client will be superfluous according to the new definition and will be omitted
Removing variables from existing structs that are used in existing functions - I'm not sure about this case
Changing the type of a parameter used in an existing function - that depends on the change. For a simple type the serialization/deserialization may still work, e.g. String to int.
Note that I'm not 100% sure about this list, but a few tests will show you what works and what doesn't. The point is that XML is sent over the wire, which gives you some flexibility.
It doesn't. You'll have to manage that manually somehow, typically by creating a new interface as you introduce major/breaking changes.
More generally speaking, this is an architectural problem, rather than a technical one. Once an interface is published, you really need to think about how to handle changes.

Entity Framework and Encapsulation

I would like to experimentally apply an aspect of encapsulation that I read about once, where an entity object includes the domains for its attributes; e.g. for its CostCentre property, it contains the list of valid cost centres. This way, when I open an edit form for an Extension, I only need to pass the form one Extension object, whereas I normally also access a CostCentre object when initialising the form.
This also applies where I have a list of Extensions bound to a grid (Telerik RadGrid) and I handle an edit command on the grid. I want to create an edit form and pass it an Extension object, whereas currently I pass the edit form an ExtensionID and create my object in the form.
What I'm actually asking for here are pointers to guidance on doing it this way, or on the "proper" way of achieving something similar to what I have described.
It would depend on your data source. If you are retrieving the list of Cost Centers from a database, that would be one approach. If it's a short list of predetermined values (like Yes/No/Maybe So) then property attributes might do the trick. If it needs to be more configurable per-environment, then IoC or the Provider pattern would be the best choice.
I think your problem is similar to a custom ad-hoc search page we did on a previous project. We decorated our entity classes and properties with attributes that contained some predetermined 'pointers' to the lookup value methods and their relationships. Then we created a single custom UI control (like the edit page described in your post) which used these attributes to generate the drop-down and auto-completion text box lists by dynamically generating a LINQ expression, then executing it at run-time based on whatever the user was doing.
This was accomplished with basically three moving parts: A) the attributes on the data access objects, B) the 'attribute facade' methods in the middle tier compiling and generating dynamic LINQ expressions, and C) the custom UI control that called our middle-tier service methods.
Sometimes plans like these backfire, but in our case it worked great. Decorating our objects with attributes and then creating a single path of logic gave us just enough power to do what we needed while minimizing the amount of code required, and it completely eliminated any boilerplate. However, this approach was not very configurable: by compiling these attributes into the code, we tightly coupled our application to the data source. On this particular project that wasn't a big deal because it was a client's internal system and it fit the project timeline. However, on a "real product", implementing the logic with the Provider pattern or using something like the Castle Project's IoC container would have given us the same power with a great deal more configurability. The downside is that there is more to manage, and more that can go wrong with deployments, etc.