How to accumulate errors in a functional way when validating a database object? - scala

I have a Product case class, which is returned by the DAO layer (using Salat). When a user creates a product for the first time, the product's status remains "draft" and no field of the product is mandatory.
What are the best functional ways to validate 10 of the product's attributes, accumulate all validation errors into a single entity, and then pass all errors at once as JSON to the front end?

I assume the core of the question is how to accumulate errors--JSON formatting is a separate issue and does not depend upon how you have collected your errors.
If it's really just a validation issue, you can have a series of methods
def problemWithX: Option[String] = ...
which return Some(errorMessage) if the corresponding attribute is invalid, or None if it's okay. Then it's as simple as
List(problemWithX, problemWithY, ...).flatten
to create a list of all of your errors. If the list is empty, you're good to go. If not, you have the errors listed. Creating a sensible error report is the job of each problemWithX method--and of course you need to decide whether merely a string or more complex information is necessary. (You may even need to define an Invalid trait and have classes extend it to handle different conditions.)
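To make the shape concrete, here is a minimal sketch; the Product fields and the two checks are made up for illustration, not taken from your actual model:

// Hypothetical, simplified Product with just two fields to validate.
case class Product(name: String, price: Double)

def problemWithName(p: Product): Option[String] =
  if (p.name.trim.isEmpty) Some("name must not be empty") else None

def problemWithPrice(p: Product): Option[String] =
  if (p.price < 0) Some("price must not be negative") else None

// Collect every error; an empty list means the product is valid.
def validate(p: Product): List[String] =
  List(problemWithName(p), problemWithPrice(p)).flatten

// validate(Product("", -1.0))      == List("name must not be empty", "price must not be negative")
// validate(Product("Widget", 9.9)) == Nil

Serialising the resulting List[String] to JSON for the front end is then a separate, mechanical step.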

This is exactly what ScalaZ's Validation type is for.
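For completeness, a rough sketch of how error accumulation looks there (Scalaz 7 syntax; the Draft type and its checks are again illustrative, not your model):

import scalaz._, Scalaz._

case class Draft(name: String, price: Double)

def checkName(d: Draft): ValidationNel[String, String] =
  if (d.name.trim.nonEmpty) d.name.successNel else "name must not be empty".failureNel

def checkPrice(d: Draft): ValidationNel[String, Double] =
  if (d.price >= 0) d.price.successNel else "price must not be negative".failureNel

// The applicative combination runs every check and accumulates all failures
// into a NonEmptyList instead of stopping at the first one.
def validate(d: Draft): ValidationNel[String, Draft] =
  (checkName(d) |@| checkPrice(d))((_, _) => d)

A Failure carries every error message at once, which maps naturally onto the "all errors in one JSON response" requirement.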

Related

DDD: can an event handler construct a value object for an aggregate?

Can I construct a value object in the event handler, or should I pass the parameters to the aggregate to construct the value object itself? Seller is the aggregate and Offer is the value object. Will it be better for the aggregate to pass the value object in the event?
public async Task HandleAsync(OfferCreatedEvent domainEvent)
{
    var seller = await this.sellerRepository.GetByIdAsync(domainEvent.SellerId);
    var offer = new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
    seller.AddOffer(offer);
}
should I pass the parameters to the aggregate to construct the value object itself?
You should probably default to passing the assembled value object to the domain entity / root entity.
The supporting argument is that we want to avoid polluting our domain logic with plumbing concerns. Expressed another way, new is not a domain concept, so we'd like that expression to live "somewhere else".
Note that by passing the value to the domain logic, you protect that logic from changes to the construction of the values; for instance, how much code has to change if you later discover that there should be a fourth constructor argument?
That said, I'd consider this to be a guideline - in cases where you discover that violating the guideline offers significant benefits, you should violate the guideline without guilt.
Will it be better for the aggregate to pass the value object in the event?
Maybe? Let's try a little bit of refactoring....
// WARNING: untested code ahead
public async Task HandleAsync(OfferCreatedEvent domainEvent)
{
    var seller = await this.sellerRepository.GetByIdAsync(domainEvent.SellerId);
    Handle(domainEvent, seller);
}

static void Handle(OfferCreatedEvent domainEvent, Seller seller)
{
    var offer = new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
    seller.AddOffer(offer);
}
Note the shift - where HandleAsync needs to be aware of async/await constructs, Handle is just a single threaded procedure that manipulates two local memory references. What that procedure does is copy information from the OfferCreatedEvent to the Seller entity.
The fact that Handle here can be static, and has no dependencies on the async shell, suggests that it could be moved to another place; another hint being that the implementation of Handle requires a dependency (Offer) that is absent from HandleAsync.
Now, within Handle, what we are "really" doing is copying information from OfferCreatedEvent to Seller. We might reasonably choose:
seller.AddOffer(domainEvent);
seller.AddOffer(domainEvent.offer());
seller.AddOffer(new Offer(domainEvent));
seller.AddOffer(new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id));
seller.AddOffer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
These are all "fine" in the sense that we can get the machine to do the right thing using any of them. The tradeoffs are largely related to where we want to work with the information in detail, and where we prefer to work with the information as an abstraction.
In the common case, I would expect that we'd use abstractions for our domain logic (therefore: Seller.AddOffer(Offer)) and keep the details of how the information is copied "somewhere else".
The OfferCreatedEvent -> Offer function can sensibly live in a number of different places, depending on which parts of the design we think are most stable, how much generality we can justify, and so on.
Sometimes, you have to do a bit of war gaming: which design is going to be easiest to adapt if the most likely requirements change happens?
I would also advocate for passing an already assembled value object to the aggregate in this situation. In addition to the reasons already mentioned by @VoiceOfUnreason, this also fits more naturally with the domain language. Also, when reading code and method APIs you can then focus on domain concepts (like an offer) without being distracted by details until you really need to know them.
This becomes even more important if you need to pass in more than one value object (or entity). Passing in all the values required for construction as individual parameters instead would not only make the API less resilient to refactoring but would also burden the reader with more details.
The seller is receiving an offer.
Assuming this is what is meant here, it fits better than something like the following:
The seller receives some buyer id, product id, etc.
This most probably would not be found in conversations using the ubiquitous language. In my opinion, code should be as readable as possible and express the behaviour and business logic as closely to human language as possible: you compile code for machines to execute, but you write it for humans to easily understand.
Note: in certain cases I would even consider using factory methods on value objects to unburden the client code of knowing what else might be needed to assemble a valid value object; for instance, if there are different valid combinations and ways of constructing the same value object, where some values need reasonable defaults or are chosen by the value object itself. In more complex situations a separate factory might even make sense.

Keyed registration with implicit types and no defaults

When I register a type with Autofac and use PreserveExistingDefaults, it registers a default for T if no previous registrations for that type exist. I know this is how it is intended, but is there a way to have it not register a default at all, but still get registered for implicit types?
My use case is that I want to (1) force consumers of type T to rely on a keyed registration (i.e. throw an exception if T is requested without a key filter), and (2) I also want those registrations to show up in IEnumerable<T> implicit injections.
I saw in this answer that I can just add an As<T> registration to accomplish (2), but it also registers a default so I don't get (1).
Unfortunately, this is not a use case Autofac addresses.
I might suggest that there's a logic flaw in the design anyway. If a developer can't resolve a single T without a key, but they can resolve all the T without keys, then folks will simply work around the problem by resolving all of them and manually choosing the one they want. I would recommend revisiting the requirements so you don't have the situation you describe.

Which exception to throw when I find my data in an inconsistent state in Scala?

I have a small Scala program which reads data from a data source. This data source is currently a .csv file, so it can contain data inconsistencies.
When implementing a repository pattern for my data, I implemented a method which will return an object by a specific field which should be unique. However, I can't guarantee that it will really be unique, as in a .csv file, I can't enforce data quality in a way I could in a real database.
So, the method checks whether there are one or zero objects with the requested field value in the repository, and that goes well. But I don't know Scala well (or Java for that matter), and the charts of the Java exception hierarchy which I found were not very helpful. Which would be the appropriate exception to throw if there are two objects with the same supposedly unique value? What should I use?
There are two handy exceptions for such cases: IllegalStateException and IllegalArgumentException. The first is used when an object's internal state is invalid (say, calling connect twice); the second (which seems more suitable to your case) is used when data that comes from the outside world does not satisfy some prescribed condition: e.g. a negative value passed to a function that is supposed to work only with zero and positive values.
Neither is something that should be handled programmatically on the caller side (with try/catch) -- they signify illegal usage of the API and/or logical errors in program flow, and such errors have to be fixed during development (in your case, they have to inform the developer who is passing that data that the specific field must contain only unique values).
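For the concrete case of a duplicate "unique" field, a minimal sketch of one option (the Record type and the in-memory backing list are made up for illustration). It uses IllegalStateException because the inconsistency lives in the repository's backing data rather than in the caller's argument; the same shape works with IllegalArgumentException if you view the CSV input as the offending data:

// Hypothetical record type and a repository backed by an in-memory list.
final case class Record(id: String, email: String)

class CsvRepository(records: List[Record]) {
  // Returns the unique record with the given email, or None if absent;
  // throws if the supposedly unique field is duplicated in the source data.
  def findByEmail(email: String): Option[Record] =
    records.filter(_.email == email) match {
      case Nil           => None
      case single :: Nil => Some(single)
      case duplicates    =>
        throw new IllegalStateException(
          s"Expected at most one record with email '$email', found ${duplicates.size}")
    }
}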
You can always use a custom exception, and in case this is a web API you might want to map your exception to a Bad Request (400) response code.

How to extend the Java API to be able to introduce new annotations [closed]

Can you explain how I can extend or change the Java API to use two new annotations, #any and #option, that allow multiplicities in Java?
The main idea behind multiplicities is the following:
Multiplicities help to solve many maintenance problems when we change a to-many relationship into a to-one relationship, or vice versa.
I would like to use the above annotations on fields, method parameters, and return values.
For example:
class MyClass {
    String #any name; // instead of using List<String>, I would like to use #any

    public String setFullname(String #option name) { // #option is another multiplicity
        ...
    }
}
To allow this definition, I have to change the Java API and extend it with these two annotations, but I don't know how to do this.
Can you tell me how to change the API, and which steps I must follow to achieve my requirements?
Please look at this paper to understand the issue.
As explained in that paper, building to-many relationships with plain collections (instead of multiplicities) causes a number of problems:
"It makes maintenance tedious and error-prone."
<< If the maintenance of a program requires a change of a relationship from to-one to to-many (or vice versa), nearly every occurrence of the variable representing this relationship in the program needs to be touched. In a language with static type checking, these occurrences will be identified by the compiler (as type errors) after the declaration of the field has been changed so that, at least, no use is forgotten; in a language without it, the change is extremely error-prone. >>
"It changes conditions of subtyping"
<< If S is a subtype of T, an expression of type S can be assigned to a variable (representing a to-one relationship) of type T. When the relationship is upgraded to to-many and the types of the expression and variable are changed to Collection<S> and Collection<T> to reflect this, the assignment is no longer well-typed [18]. To fix this, the use of a (former to-one and now to-many) variable must be restricted to retrieving elements of its collection, which may require substantial further code changes. We consider this dependency of subtyping on multiplicity to be awkward. >>
"It changes call semantics"
Yet another manifestation of the discontinuity is that when a variable holding a related object is used as an actual parameter of a method call with call-by-value semantics, the method cannot change the value of the variable (i.e., to which object the variable points), and thus cannot change which object the variable’s owner is related to. By contrast, when the variable holds a collection of related objects, passing this variable by-value to a method allows the method to add and remove from the collection, and thus to change which objects the variable’s owner is related to, effectively giving the call by-reference semantics. We consider this dependency of semantics on multiplicity to be awkward.
Surely, there is an easy fix to all problems stemming from the noted discontinuity: implement to-one relationships using containers also. For instance, the Option class in Scala has two subclasses, Some and None, where Some wraps an object of type E in an object of type Option[E], which can be substituted by None, so that the object and no object have a uniform access protocol (namely that of Option). By making Option implement the protocol of Collection, the above noted discontinuity will mostly disappear. However, doing so generalizes the problems of collections that stem from putting the content over the container. Specifically:
"Related objects have to be unwrapped before they can be used".
Using containers for keeping related objects, the operations executable on a variable representing the relationship are the operations of the container and not of the related objects. For instance, if cookies have the operation beNibbled(), the same operation can typically not be expected from a collection of cookies (since Collection is a general-purpose class).
"It subjects substitutability to the rules of subtyping of generics". While the difference in subtyping between to-one and to-many variables (item 2 above) has been
removed, the wrong version has survived: now, a to-one relationship with target type T, implemented as a field having type Option, cannot relate to an object of T’s subtype S (using Option, unless restrictions regarding replacement of the object are accepted).
"It introduces an aliasing problem".
While aliasing is a general problem of object-oriented programming (see, e.g., [11, 19]), the use of container objects to implement relationships introduces the special problem of aliasing the container: when two objects share the same container, the relationship of one object cannot evolve differently from that of the other. This may however not model the domain correctly, and can lead to subtle programming errors.
"Expressive poverty".
More generally, given only collections it is not possible to express the difference between “object a has a collection, which contains objects b1 through bn” and “object a has objects b1 through bn”. While one might maintain that the former is merely the object-oriented way of representing the latter, and that the used collection is merely an implementation object, it could be the case that the collection is actually a domain object (as which it could even have aliases; cf. above). In object-oriented modelling, by contrast, collections serving as implementation classes are abstracted from by specifying multiplicities larger than 1 (possibly complemented by constraints on the type of the collection, i.e., whether it is ordered, has duplicates, etc.). A collection class in a domain model is therefore always a domain class.
The following figure highlights these problems using a sample program from the internet service provider domain.
http://infochrist.net/coumcoum/Multiplicities.png
Initially, a customer can have a single email account which, according to the price plan selected, is either a POP3 or an IMAP account. Accounts are created by a factory (static method Account.make, Figure 1 left, beginning in line 4) and, for reasons of symmetry, are also deleted by a static method (Account.delete; line 19); due to Java’s lack of support for calling by reference (or out parameters), however, delete does not work as expected. Therefore, resetting of the field account to null has been replicated in the method Customer.transferAccount (line 40).
When the program is upgraded to support multiple accounts per customer, the first change is to alter the type of account to List (Figure 1 right, line 30). As suggested by above Problem 1, this entails a number of changes. In class Customer it requires the introduction of an iteration over all accounts (line 35), and the replacement of the receiver of the method set, account, with the iteration variable a (Problem 4). In class Account, make must be changed to return a list of accounts (line 4), and the construction of accounts (lines 7 and 12) must be replaced by the construction of lists that contain a single account of the appropriate type. Admittedly, making the Account factory return a list seems awkward; yet, as we will see, it only demonstrates Problem 7. Also, it brings about a change in the conditions of subtyping (Problem 2): for make to be well-typed (which it is not in Figure 1, right), its return type would either have to be changed to List (requiring a corresponding change of the type of Customer.account, thus limiting account’s use to read access; Problem 5), or the created lists would need to be changed to element type Account. The parameter type of Account.delete needs to be changed to List also; replacing the assignment of null with clearing the list (line 20) to better reflect the absence of an account (cf. the above discussions on the different meanings of null) makes delete work as intended, which may however change the semantics of a program actually calling delete (Problem 3). An analogous change from assigning null to calling clear() in class Account, line 40, introduces a logical error, since the transferred account is accidentally cleared as well (Problem 6).
The solution is to use multiplicities, as follows (look at the comment below for the image):
The question is now, how can I implement multiplicities in Java?
You are confused about what API means. To implement this idea, you would need to edit the source code of the Java compiler, and what you would end up with would no longer be Java; it would be a forked version of Java, which you would have to call something else.
I do not think this idea has much merit, to be honest.
It's unclear why you think this would solve your problem, and using a non-standard JDK will -- in fact -- give you an even greater maintenance burden. For example, when there are new versions of the JDK, you will need to apply your updates to the new version as well when you upgrade. Not to mention the fact that new employees you hire will not be familiar with your language that deviates from Java.
Java does allow one to define custom annotations:
http://docs.oracle.com/javase/1.5.0/docs/guide/language/annotations.html
... and one can use reflection or annotation processors to do cool things with them. However, annotations cannot be used to so drastically change the program semantics (like magically making a String mean a List of strings, instead) without forking your own version of the JDK, which is a bad idea.

Why does Optional not extend Supplier

I use Supplier quite often, and I'm looking at the new Guava 10 Optional now.
In contrast to a Supplier, an Optional guarantees never to return null but will throw an IllegalStateException instead. In addition, it is immutable and thus has a fixed, known value once it is created. In contrast to that, a Supplier may be used to create different or lazy values triggered by calling get() (though it is not required to do so).
I followed the discussion about why an Optional should not extend a Supplier and I found:
...it would not be a well-behaved Supplier
But I can't see why, as Supplier explicitly states:
No guarantees are implied by this interface.
For me it would fit, but it seems I have been employing Suppliers in a different way than was originally intended. Can someone please explain to me why an Optional should NOT be used as a Supplier?
Yes: it is quite easy to convert an Optional into a Supplier (and in addition you may choose whether the adapted Supplier.get() will return Optional.get() or Optional.orNull()), but you need some additional transformation and have to create a new object for each :-(
Seems there is some mismatch between the intended use of a Supplier and my understanding of its documentation.
Dieter.
Consider the case of
Supplier<String> s = Optional.absent();
Think about this: you have a type containing one method that takes no arguments, but for which it's a programmer error to ever invoke that method! Does that really make sense?
You'd only want Supplierness for "present" optionals, but then, just use Suppliers.ofInstance.
A Supplier is generally expected to be capable of returning objects (assuming no unexpected errors occur). An Optional is something that explicitly may not be capable of returning objects.
I think "no guarantees are implied by this interface" generally means that there are no guarantees about how it retrieves an object, not that the interface need not imply the ability to retrieve an object at all. Even if you feel it is OK for a Supplier instance to throw an exception every time you call get() on it, the Guava authors do not feel that way and choose to only provide suppliers that can be expected to be generally well-behaved.