Validating and updating entities in Zend Framework 2 + Doctrine 2 - forms

The title may not be entirely accurate; if someone can think of a better one, please update it :)
I have a small CMS to edit users.
I'm using Zend Framework 2 + Doctrine 2.
I have a fieldset + form to add a user and (possibly) the same one to update them.
The User entity has the following fields: id, username, password, email.
The fieldset has two validators that check if the username and the email already exist.
Since I'm using the same fieldset to update users, when I change, for example, the username and keep the email the same, the validator throws an "email exists" error (which is expected, given the validator), and likewise a "username exists" error when I change the email and keep the username the same.
What I want is to avoid that behavior and run the checks only on fields that have actually changed.
I thought of some ways to do this, but I'm not sure what the "best" approach would be:
Option 1: Hardcode the whole thing by checking whether the fields changed and only then running the validation (which makes the fieldset pretty much useless).
Option 2: Add a function to the User entity that accepts an array of new values, compares them to the old ones, and passes only the changed ones to a "validation" function that returns the errors (much like the previous option, but a bit more structured, I guess).
Option 3: Write a validator, attached to a new form, that queries the db to check whether the email/username exists and is not already in use by this particular id. I'm not quite sure how to write it, though, since I can't figure out how to pass the id and the field to the validator.
I guess option 3 would be the best, since it does two jobs at once: checking whether the field changed and whether it is already in use by another user (I've sketched the idea at the end of this post).
What do you suggest? How do you deal with that kind of scenario?
I can post any code that's needed, but I think this is more of a structural problem, and the code I'm using is fairly standard and easy to figure out.
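Conceptually, the check I have in mind for option 3 boils down to "is this value unused, or used only by the record I'm editing?". Here is a framework-neutral sketch of that logic (TypeScript with a made-up repository interface, not actual ZF2/Doctrine code); the part I can't figure out is how to get the current id and the field into the validator itself:

```typescript
// Hypothetical repository interface standing in for the Doctrine repository.
interface UserRecord {
  id: number;
  email: string;
}

interface UserRepository {
  findByEmail(email: string): Promise<UserRecord | null>;
}

// The email is acceptable when nobody uses it, or when the only user that
// uses it is the very record being edited (matched by id).
async function isEmailAvailable(
  repo: UserRepository,
  email: string,
  currentUserId: number | null, // null when adding a brand-new user
): Promise<boolean> {
  const existing = await repo.findByEmail(email);
  if (existing === null) {
    return true; // nobody uses this email yet
  }
  return currentUserId !== null && existing.id === currentUserId;
}
```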


CQRS - How to handle a command that requires data from the db (a query)

I am trying to wrap my head around the best way to approach this problem.
I am importing a file that contains a bunch of users, so I created a handler called ImportUsersCommandHandler, and my command is ImportUsersCommand, which has a List<User> as one of its parameters.
In the handler, for each user that I need to import, I have to make sure that the UserType is valid; this is where the confusion comes in. I need to query the database to get a list of all possible user types, and then, for each user I am importing, verify that the user type id in the import matches one that is in the db.
I have 3 options.
Option 1: Create a GetUserTypesQuery, get its result, and pass it to the ImportUsersCommand as a list, then verify inside the command handler.
Option 2: Call GetUserTypesQuery from the command handler itself rather than passing the list in (a command calling a query).
Option 3: Don't create a GetUserTypesQuery at all and just run the query directly inside the command handler (still a query, but with no query/handler pair involved).
I feel like all these are dirty solutions and not the correct way to apply CQRS.
I agree option 1 sounds the best, but I would suggest adding a pre-handler to validate your input.
So ImportUsersCommandHandler deals with importing your data (and only that), and you add a handler that runs before it, validates the input (in your example, checking the user types and maybe other things), and bails out if it does not pass. The validation handler queries the db, checks the user types, and does whatever it needs to on failure; otherwise it just passes the command down to your business handler (ImportUsersCommandHandler).
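Roughly, the shape is something like this (a TypeScript sketch of the pre-handler/decorator idea; all names are made up to illustrate the pattern, not a real library API):

```typescript
interface User { name: string; userTypeId: number; }
interface ImportUsersCommand { users: User[]; }

interface Handler<TCommand> {
  handle(command: TCommand): Promise<void>;
}

// Runs before the business handler and bails out if validation fails.
class ImportUsersValidationHandler implements Handler<ImportUsersCommand> {
  constructor(
    private readonly inner: Handler<ImportUsersCommand>,          // the real ImportUsersCommandHandler
    private readonly getUserTypeIds: () => Promise<Set<number>>,  // roughly, the GetUserTypesQuery
  ) {}

  async handle(command: ImportUsersCommand): Promise<void> {
    const validTypeIds = await this.getUserTypeIds();
    const invalid = command.users.filter(u => !validTypeIds.has(u.userTypeId));
    if (invalid.length > 0) {
      // Bail out before the business handler ever runs.
      throw new Error(`Unknown user type for: ${invalid.map(u => u.name).join(', ')}`);
    }
    // Validation passed: hand off to the handler that only does the import.
    await this.inner.handle(command);
  }
}
```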
I am used to using MediatR in .NET Core, where this pattern works well (it's what we do), so sorry if this does not fit with your environment/setup!

How to properly access children by filtering parents in a single REST API call

I'm rewriting an API to be more RESTful, but I'm struggling with a design issue. I'll explain the situation first and then my question.
SITUATION:
I have two sets of resources, users and items. Each user has a list of items, so the resource path would look something like this:
api/v1/users/{userId}/items
Also, each user has an isPrimary property, but only one user can be primary at a time. This means that if you want to get the primary user, you'd do something like this:
api/v1/users?isPrimary=true
This should return a single "primary" user.
I have a client of my API that wants to get the items of the primary user but can't make two API calls (one to get the primary user and a second to get that user's items using the userId). Instead, the client would like to make a single API call.
QUESTION:
How should I go about designing an API that fetches the items of a single user in only one API call, when all the client has is the isPrimary query parameter for the user?
MY THOUGHTS:
I think I have some options:
Option 1) api/v1/users?isPrimary=true will return the list of items along with the user data.
I don't like this one, because I have other API clients that call api/v1/users or api/v1/users?isPrimary=true to only get and parse through user data NOT item data. A user can have thousands of items, so returning those items every time would be taxing on both the client and the service.
Option 2) api/v1/users/items?isPrimary=true
I also don't like this because it's ugly and not really RESTful, since there is no {userId} in the path and isPrimary isn't a property of items.
Option 3) api/v1/users?isPrimary=true&isShowingItems=true
This is like the first one, but I use another query parameter to flag whether or not to show the items belonging to the user in the response. The problem is that the query parameter is misleading because there is no isShowingItems property associated with a user.
Any help that you all could provide will be greatly appreciated. Thanks in advance.
There's no real standard solution for this, and all of your solutions are in my mind valid. So my answer will be a bit subjective.
Have you looked at HAL for your API format? HAL has a standard way to embed data from one resource into another (using _embedded), and this sounds like a pretty valid use case for it.
The server can decide whether to embed the items based on a number of criteria, but one cheap solution might be to just add a query parameter like ?embed=items
Even if you don't use HAL, you could still copy this behavior conceptually, or maybe adopt only _embedded. At least that reuses an existing idea instead of building something new.
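To make that concrete, here's a rough sketch (TypeScript, with made-up types and helper names) of a HAL-style user representation that embeds the items only when the client asks for them, e.g. via ?embed=items:

```typescript
// Made-up types and helper; only the HAL-ish shape of the output matters here.
interface Item { id: string; name: string; }
interface User { id: string; name: string; isPrimary: boolean; }

function buildUserResource(user: User, embeddedItems: Item[] | null) {
  return {
    _links: {
      self: { href: `/api/v1/users/${user.id}` },
      items: { href: `/api/v1/users/${user.id}/items` },
    },
    id: user.id,
    name: user.name,
    isPrimary: user.isPrimary,
    // Only include _embedded when the client asked for it (e.g. ?embed=items),
    // so plain user queries stay cheap for clients that don't want the items.
    ...(embeddedItems !== null ? { _embedded: { items: embeddedItems } } : {}),
  };
}
```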
Aside from that practical solution, there is nothing un-RESTful about exposing data at multiple endpoints. So if you created a resource like:
/v1/primary-user-with-items
Then this might be ugly and inconsistent with the rest of your API, but it is not inherently "not RESTful" (sorry for the double negative).
You could include a List<User.Fieldset> parameter called fieldsets, and then only include things if they are specified in fieldsets. This has the benefit that you can reuse the pattern by adding fieldsets onto any object in your API that has fields you might wish to include optionally.
api/v1/users?isPrimary=true&fieldsets=items
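A quick sketch of how the server side might honor that parameter (TypeScript; the parameter name and parsing are illustrative only):

```typescript
// Parse a fieldsets query parameter and gate optional data on it.
function parseFieldsets(raw: string | undefined): Set<string> {
  return new Set((raw ?? '').split(',').map(s => s.trim()).filter(Boolean));
}

const fieldsets = parseFieldsets('items');   // e.g. taken from ?fieldsets=items
const includeItems = fieldsets.has('items'); // decide whether to attach the items
console.log(includeItems);                   // true
```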

Master Data Services - Domain based attributes

We are using Master Data Services as an MDM solution for our SQL Server BI environment. I have an entity containing a first name and a last name, and I have created a business rule that concatenates these two fields to form a full name, which is then stored in the "name" system field of the entity.
I use this as a domain-based entity in another entity; the user can then see the full name before linking it as an attribute in the second entity.
I want to prevent users from capturing data against the name attribute in the first entity, because the business rule handles the logic that populates it. I have read that there are two ways to do this:
Set the attribute's display width to zero. This does not seem to work; the Explorer view still shows a narrow version of the field in the rows, and the user can still edit the field in the detail pane.
Use security to make the attribute read-only. I have tried different combinations of this, but it seems that you cannot use this functionality for the name field (a system field).
This seems like pretty basic functionality, yet there appears to be no clear-cut way to do it in MDS.
Any assistance will be appreciated.
Thanks
We do exactly the same thing.
I tested it, and whether you create a new member or edit an existing one, the business rule just overwrites the manually entered value in the name attribute.
Is there a specific "business" reason why you need to restrict data input in the name field? If it is for UX reasons, you can change the display name of the name attribute to something like "Don't populate", or alternatively make it a ".", so the users won't know to input anything there.

Form Validation and Security with Meteor Methods

Let's assume one uses a form to update a doc in a collection.
Generally, upon submit, one would use some type of form validation to verify the sanity of the fields in the form. Then, once the data passes validation, let's assume it is passed to a Meteor method to actually update the collection.
But theoretically, a user could use the JavaScript console to fabricate a Meteor call to the same update method. For security reasons, doesn't this imply that the fields must be verified for sanity in the method too, in order to validate submissions made via the console?
So, for normal submission cases via the form, this will cause the same fields to be verified twice (once during form validation, and once within the method).
Is there an elegant way to get around the redundant verification, or must all methods have a redundant field verification step?
You should consider using aldeed:collection2 for validating updates to collections. Normally you define your schema in /lib, and then updates are validated both on the client and on the server, but you only have to write the code once. If you want to avoid double work, then validate only on the server, because you can't trust the client. However, that is not recommended: the cost of client-side validation is borne by your user's browser, not your server, and you can create a better UX by validating fields as they are entered instead of on submit, because the user gets feedback earlier.
My basic validation approach:
1. An event handler on each form field's change event (change() {}). This does things like making the field border green for a valid entry or showing a red X for an invalid one.
2. Collection2 validates document inserts/updates on the client.
3. Methods validate their arguments.
4. Collection2 validates document inserts/updates on the server.
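A minimal sketch of layers 2-4, assuming the aldeed:collection2 package (which adds attachSchema to collections) and the simpl-schema npm package; the collection and method names below are made up:

```typescript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';
import SimpleSchema from 'simpl-schema';

// Defined in /lib (or a shared import) so it loads on both client and server.
export const Posts = new Mongo.Collection('posts');

// Layers 2 and 4: collection2 validates every insert/update against this schema,
// once in the client simulation and again on the server.
Posts.attachSchema(new SimpleSchema({
  title: { type: String, max: 100 },
  body: { type: String },
}));

Meteor.methods({
  'posts.updateTitle'(postId: string, title: string) {
    // Layer 3: the method validates its own arguments.
    check(postId, String);
    check(title, String);
    // The update itself is then validated by collection2 as well.
    Posts.update(postId, { $set: { title } });
  },
});
```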
More reading:
http://0rocketscience.blogspot.com/2015/07/meteor-security-no-2-all-praise-aldeed.html
http://0rocketscience.blogspot.com/2015/12/meteor-security-no-4-extending-match.html

Do Salesforce VF email templates require the related object to be persisted?

When a new lead comes in, I want to use a before trigger and a Visualforce email template that contains lead field values to send an email using the SingleEmailMessage class. The email is being generated, but all of the lead fields are null, even though (as confirmed via System.Debug) they do have values going into the call.
Since I'm passing the still-unsaved lead's Id via the mail.setWhatId(lead.Id) method, I'm beginning to think that the mail class is using the Id value to do a database look-up rather than treating it as a reference to the still-unsaved lead in memory.
Does anyone know if that's the case? My class works flawlessly when the lead already exists.
If it is the case that the Apex mail class does a DB read, are there any pattern suggestions for the case where one needs to send an email and update a lead field value before the lead is saved? I can't use a Workflow email notification because the email is addressed to customers, and there's some additional Apex code that sorts out which address to fetch from existing Account records based on some Lead fields; hence, I think, the need for VF email templates in the first place.
setWhatId (and pretty much any method that takes an ID value as an argument) definitely does expect the row to be persisted already. To get around this, you should be able to just do your field update in the before trigger, and add an after trigger to send the email.