Why does Optional not extend Supplier - guava

I use Supplier quite often, and I'm now looking at the new Guava 10 Optional.
In contrast to a Supplier, an Optional guarantees never to return null; calling get() on an absent Optional throws an IllegalStateException instead. In addition, it is immutable, so it has a fixed, known value once it is created. A Supplier, by contrast, may produce different or lazily computed values each time get() is called (though it is not required to do so).
I followed the discussion about why an Optional should not extend a Supplier and I found:
...it would not be a well-behaved Supplier
But I can't see why, as Supplier explicitly states:
No guarantees are implied by this interface.
For me it would fit, but it seems I have been using Suppliers in a different way than was originally intended. Can someone please explain to me why an Optional should NOT be used as a Supplier?
Yes: it is quite easy to convert an Optional into a Supplier (and you may even choose whether the adapted Supplier.get() should return Optional.get() or Optional.orNull()),
but you need an additional transformation and have to create a new adapter object for each Optional :-(
It seems there is some mismatch between the intended use of a Supplier and my understanding of its documentation.
Dieter.

Consider the case of
Supplier<String> s = Optional.absent();
Think about this. You have a type containing one method, that takes no arguments, but for which it's a programmer error to ever invoke that method! Does that really make sense?
You'd only want Supplier-ness for "present" optionals, but then, just use Suppliers.ofInstance.

A Supplier is generally expected to be capable of returning objects (assuming no unexpected errors occur). An Optional is something that explicitly may not be capable of returning objects.
I think "no guarantees are implied by this interface" generally means that there are no guarantees about how it retrieves an object, not that the interface need not imply the ability to retrieve an object at all. Even if you feel it is OK for a Supplier instance to throw an exception every time you call get() on it, the Guava authors do not feel that way and choose to only provide suppliers that can be expected to be generally well-behaved.
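To make the mismatch concrete, here is a small sketch using the JDK's java.util.Optional and java.util.function.Supplier as stand-ins for the Guava types (the adapter methods and class name are hypothetical, not Guava API):

```java
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.function.Supplier;

public class OptionalAsSupplier {

    // Hypothetical adapter: get() throws when the Optional is empty,
    // mirroring Guava's Optional.get() behaviour.
    static <T> Supplier<T> throwingSupplier(Optional<T> o) {
        return o::get;
    }

    // Hypothetical adapter: get() returns null when empty,
    // mirroring Guava's Optional.orNull().
    static <T> Supplier<T> nullableSupplier(Optional<T> o) {
        return () -> o.orElse(null);
    }

    public static void main(String[] args) {
        Supplier<String> present = throwingSupplier(Optional.of("x"));
        System.out.println(present.get()); // prints "x"

        // An absent Optional viewed as a Supplier has a get() that can
        // never be called successfully -- the "not well-behaved" case.
        Supplier<String> absent = throwingSupplier(Optional.empty());
        try {
            absent.get();
        } catch (NoSuchElementException e) {
            System.out.println("programmer error: always throws");
        }

        Supplier<String> safe = nullableSupplier(Optional.empty());
        System.out.println(safe.get()); // prints "null"
    }
}
```

The caller has to decide per adapter which semantics get() should have, which is exactly the ambiguity Guava avoids by not making Optional a Supplier.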

DDD can event handler construct value object for aggregate

Can I construct a value object in the event handler, or should I pass the parameters to the aggregate to construct the value object itself? Seller is the aggregate and Offer is the value object. Would it be better to pass the value object to the aggregate in the event?
public async Task HandleAsync(OfferCreatedEvent domainEvent)
{
    var seller = await this.sellerRepository.GetByIdAsync(domainEvent.SellerId);
    var offer = new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
    seller.AddOffer(offer);
}
should I pass the parameters to the aggregate to construct the value object itself?
You should probably default to passing the assembled value object to the domain entity / root entity.
The supporting argument is that we want to avoid polluting our domain logic with plumbing concerns. Expressed another way, new is not a domain concept, so we'd like that expression to live "somewhere else".
Note that by passing the value to the domain logic, you protect that logic from changes to the construction of the values; for instance, how much code has to change if you later discover that there should be a fourth constructor argument?
That said, I'd consider this to be a guideline - in cases where you discover that violating the guideline offers significant benefits, you should violate the guideline without guilt.
Would it be better to pass the value object to the aggregate in the event?
Maybe? Let's try a little bit of refactoring....
// WARNING: untested code ahead
public async Task HandleAsync(OfferCreatedEvent domainEvent)
{
    var seller = await this.sellerRepository.GetByIdAsync(domainEvent.SellerId);
    Handle(domainEvent, seller);
}

static void Handle(OfferCreatedEvent domainEvent, Seller seller)
{
    var offer = new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
    seller.AddOffer(offer);
}
Note the shift: where HandleAsync needs to be aware of async/await constructs, Handle is just a single-threaded procedure that manipulates two local memory references. What that procedure does is copy information from the OfferCreatedEvent to the Seller entity.
The fact that Handle here can be static, and has no dependencies on the async shell, suggests that it could be moved to another place; another hint being that the implementation of Handle requires a dependency (Offer) that is absent from HandleAsync.
Now, within Handle, what we are "really" doing is copying information from OfferCreatedEvent to Seller. We might reasonably choose:
seller.AddOffer(domainEvent);
seller.AddOffer(domainEvent.offer());
seller.AddOffer(new Offer(domainEvent));
seller.AddOffer(new Offer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id));
seller.AddOffer(domainEvent.BuyerId, domainEvent.ProductId, seller.Id);
These are all "fine" in the sense that we can get the machine to do the right thing using any of them. The tradeoffs are largely related to where we want to work with the information in detail, and where we prefer to work with the information as an abstraction.
In the common case, I would expect that we'd use abstractions for our domain logic (therefore: Seller.AddOffer(Offer)) and keep the details of how the information is copied "somewhere else".
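A minimal sketch of that common case, with all names hypothetical and written in Java rather than the question's C# for brevity: the aggregate sees only the Offer abstraction, while a small mapping function owns the constructor details.

```java
import java.util.ArrayList;
import java.util.List;

public class OfferMapping {

    // Hypothetical domain types, loosely mirroring the question's C#.
    record OfferCreatedEvent(String buyerId, String productId, String sellerId) {}
    record Offer(String buyerId, String productId, String sellerId) {}

    static class Seller {
        final String id;
        final List<Offer> offers = new ArrayList<>();
        Seller(String id) { this.id = id; }

        // The aggregate works with the Offer abstraction only.
        void addOffer(Offer offer) { offers.add(offer); }
    }

    // The knowledge of how to build an Offer from the event lives here,
    // outside the aggregate.
    static Offer toOffer(OfferCreatedEvent e, Seller seller) {
        return new Offer(e.buyerId(), e.productId(), seller.id);
    }

    public static void main(String[] args) {
        Seller seller = new Seller("s1");
        OfferCreatedEvent event = new OfferCreatedEvent("b1", "p1", "s1");
        seller.addOffer(toOffer(event, seller));
        System.out.println(seller.offers.size()); // prints 1
    }
}
```

With this split, adding a fourth Offer constructor argument touches only toOffer, not Seller.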
The OfferCreatedEvent -> Offer function can sensibly live in a number of different places, depending on which parts of the design we think are most stable, how much generality we can justify, and so on.
Sometimes, you have to do a bit of war gaming: which design is going to be easiest to adapt if the most likely requirements change happens?
I would also advocate for passing an already assembled value object to the aggregate in this situation. In addition to the reasons already mentioned by @VoiceOfUnreason, this also fits more naturally with the domain language. Also, when reading code and method APIs you can then focus on domain concepts (like an offer) without being distracted by details until you really need to know them.
This becomes even more important if you need to pass in more than one value object (or entity). Passing in all the values required for construction as individual parameters instead not only makes the API more brittle under refactoring but also burdens the reader with more details.
The seller is receiving an offer.
Assuming this is what is meant here, it fits better than something like the following:
The seller receives some buyer id, product id, etc.
This would most probably not be found in conversations using the ubiquitous language. In my opinion, code should be as readable as possible and should express the behaviour and business logic as close to human language as possible: you compile code for machines to execute it, but you write it for humans to understand it easily.
Note: in certain cases I would even consider using factory methods on value objects to unburden the client code of knowing what else might be needed to assemble a valid value object, for instance, if there are different valid combinations and ways of constructing the same value object, where some values need reasonable defaults or are chosen by the value object itself. In more complex situations a separate factory might even make sense.
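A minimal sketch of such a factory method, in Java for illustration (the Money value object and its methods are hypothetical): the value object supplies a reasonable default for a value the client does not pass in.

```java
import java.util.Currency;

public class MoneyFactory {

    // Hypothetical value object whose factory methods encapsulate
    // how a valid instance is assembled.
    record Money(long amountInCents, Currency currency) {

        // Client supplies everything explicitly; validation lives here.
        static Money of(long amountInCents, Currency currency) {
            if (amountInCents < 0) {
                throw new IllegalArgumentException("negative amount");
            }
            return new Money(amountInCents, currency);
        }

        // The value object picks a reasonable default currency itself,
        // unburdening the client of that decision.
        static Money ofEuros(long amountInCents) {
            return of(amountInCents, Currency.getInstance("EUR"));
        }
    }

    public static void main(String[] args) {
        Money m = Money.ofEuros(500);
        System.out.println(m.currency().getCurrencyCode()); // prints EUR
    }
}
```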

Keyed registration with implicit types and no defaults

When I register a type with autofac and use PreserveExistingDefaults, it registers a default for T if no previous registrations for that type exist. I know this is how it is intended, but is there a way to have it not register a default at all, but still get registered for implicit types?
My use case is that I want to (1) force consumers of type T to rely on a keyed registration (i.e. throw an exception if T is requested without a key filter), and (2) I also want those registrations to show up in IEnumerable<T> implicit injections.
I saw in this answer that I can just add an As<T>() registration to accomplish (2), but that also registers a default, so I don't get (1).
Unfortunately, this is not a use case Autofac addresses.
I might suggest that there's a logic flaw in the design anyway. If a developer can't resolve a single T without a key, but they can resolve all the T without keys, then folks will simply work around the problem by resolving all of them and manually choosing the one they want. I would recommend revisiting the requirements so you don't have the situation you describe.

working around type erasure -- recommended way?

After Scala-2.10 the situation has changed, since there is now a dedicated reflection system.
What is the recommended, best-practice, standard way the community has settled down on in order to amend the deficiencies created by type erasure?
The situation
We all know that the underlying runtime system (JVM / bytecode) is lacking the ability to fully represent parametrised types in a persistent way. This means that the Scala type system can express elaborate type relationships, which lack an unambiguous representation in plain JVM byte code.
Thus, when creating a concrete data instance, the creation context contains specific knowledge about the fine points of the embedded data. As long as the creation context is connected to the usage context statically, i.e. as long as both are connected directly within a single compilation run, everything is fine, since we stay in the "Scala realm", where any specific type knowledge can be carried along by the compiler.
But as soon as our data instance (object instance) passes a zone where only JVM bytecode properties are guaranteed, this link is lost. This might happen e.g. when
sending the data element as message to another Actor
passing it through subsystems written in other JVM languages (e.g. an OR-Mapper and RDBMS storage)
feeding the data through JVM Serialisation and any marshalling techniques built on top
and even just within Scala, passing through any function signature which discards specific type parameter information and retains only some type bound (e.g. Seq[AnyRef]) or an existential type (Seq[_])
Thus we need a way to marshal the additional specific type information.
An Example
Let's assume we use a type Document[Format]. Documents are sent and retrieved through a family of external service APIs, which mostly talk JSON (and are typically not confined to use from Java alone). Obviously, for some specific kinds of Format, we can write type classes that hold the knowledge of how to parse that JSON and convert it into explicit Scala types. But clearly there is no hope for one coherent type hierarchy to cover every kind of Document[Format] (beyond the mere fact that it is a formatted document and can be converted). It turns out that we can handle all the generic concerns elegantly (distributing load, handling timeouts and availability of some API/service, keeping a persistent record of the data). But for any actual "business" functionality, we eventually need to switch over to specific types.
The Quest
Since the JVM bytecode can not represent the type information we need, without any doubt we need to allocate some additional metadata field within our Document[Format] to represent the information "this document has Format XYZ". So, by looking at that piece of metadata, we can re-gain the fully typed context later on.
My question is about the preferred, most adequate, most elegant, most idiomatic way of solving this problem. That is, in current Scala (>= Scala-2.10).
how to represent the additional type information (i.e. the Format in the example above)? Storing Format.class in the document object? or using a type tag? or would you rather recommend a symbolic representation, e.g. 'Format or "my.package.Format"
or would you rather recommend to store the full type information, e.g. Document[Format], and which representation is recommended?
what is the most concise, most clear, most clean, most readable and self-explanatory solution in code to re-establish the full type context? Using some kind of pattern match? Using some implicit? Using a type class or view bound?
What have people found out to work well in this situation?
Scala documentation: http://docs.scala-lang.org/overviews/reflection/typetags-manifests.html
From the article:
Like scala.reflect.Manifest, TypeTags can be thought of as objects which carry along all type information available at compile time, to runtime. For example, TypeTag[T] encapsulates the runtime type representation of some compile-time type T. Note however, that TypeTags should be considered to be a richer replacement of the pre-2.10 notion of a Manifest, that are additionally fully integrated with Scala reflection.
You should also look at Manifests.
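One concrete shape for the "store Format.class in the document object" option from the question, sketched in Java since the pattern is really about what survives on the JVM (all type names hypothetical): a runtime class token travels with the data and lets you re-establish the typed context later with a checked cast.

```java
import java.util.Optional;

public class TypeTokenDemo {

    // Hypothetical format marker types.
    interface JsonFormat {}
    interface XmlFormat {}

    // The document carries a runtime token for its otherwise erased
    // Format parameter: "this document has Format XYZ".
    static final class Document<F> {
        final Class<F> format;
        final String payload;

        Document(Class<F> format, String payload) {
            this.format = format;
            this.payload = payload;
        }

        // Re-establish the full typed context: succeeds only when the
        // stored token matches the requested format.
        <G> Optional<Document<G>> as(Class<G> wanted) {
            if (wanted.equals(format)) {
                @SuppressWarnings("unchecked")
                Document<G> self = (Document<G>) this;
                return Optional.of(self);
            }
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        Document<JsonFormat> doc = new Document<>(JsonFormat.class, "{}");
        System.out.println(doc.as(JsonFormat.class).isPresent()); // prints true
        System.out.println(doc.as(XmlFormat.class).isPresent());  // prints false
    }
}
```

In Scala the Class token would typically be replaced by an implicit TypeTag, which additionally preserves type parameters of Format itself.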

How to extend the Java API to be able to introduce new annotations [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
Can you explain to me how I can extend or change the Java API to use two new annotations, @any and @option, that allow multiplicities in Java?
The main idea behind multiplicities is the following:
Multiplicities help to solve many maintenance problems that arise when we change a to-many relationship into a to-one relationship or vice versa.
I would like to use the above annotations on fields, method parameters, and return values.
For example:
class MyClass {
    String @any name; // instead of using List<String>, I would like to use @any
    public String setFullname(String @option name) { // @option is another multiplicity
        ...
    }
}
To allow this definition, I would have to change and extend the Java API with these two annotations, but I don't know how to do this.
Can you tell me how to change the API, and which steps I must follow to achieve this?
Please look at this paper to understand the issue.
As explained in that paper, using collections to build to-many relationships causes a number of problems:
"It makes maintenance tedious and error-prone."
"If the maintenance of a program requires a change of a relationship from to-one to to-many (or vice versa), nearly every occurrence of the variable representing this relationship in the program needs to be touched. In a language with static type checking, these occurrences will be identified by the compiler (as type errors) after the declaration of the field has been changed, so that at least no use is forgotten; in a language without it, the change is extremely error-prone."
"It changes the conditions of subtyping."
"If S is a subtype of T, an expression of type S can be assigned to a variable (representing a to-one relationship) of type T. When the relationship is upgraded to to-many and the types of the expression and variable are changed to Collection<S> and Collection<T> to reflect this, the assignment is no longer well-typed [18]. To fix this, the use of a (former to-one and now to-many) variable must be restricted to retrieving elements of its collection, which may require substantial further code changes. We consider this dependency of subtyping on multiplicity to be awkward."
"It changes call semantics."
"Yet another manifestation of the discontinuity is that when a variable holding a related object is used as an actual parameter of a method call with call-by-value semantics, the method cannot change the value of the variable (i.e., to which object the variable points), and thus cannot change which object the variable's owner is related to. By contrast, when the variable holds a collection of related objects, passing this variable by value to a method allows the method to add to and remove from the collection, and thus to change which objects the variable's owner is related to, effectively giving the call by-reference semantics. We consider this dependency of semantics on multiplicity to be awkward."
"Surely, there is an easy fix to all problems stemming from the noted discontinuity: implement to-one relationships using containers also. For instance, the Option class in Scala has two subclasses, Some and None, where Some wraps an object of type E with an object of type Option[E], which can be substituted by None, so that the object and no object have a uniform access protocol (namely that of Option). By making Option implement the protocol of Collection, the above noted discontinuity will mostly disappear. However, doing so generalizes the problems of collections that stem from putting the content over the container. Specifically:"
"Related objects have to be unwrapped before they can be used."
"Using containers for keeping related objects, the operations executable on a variable representing the relationship are the operations of the container and not of the related objects. For instance, if cookies have the operation beNibbled(), the same operation can typically not be expected from a collection of cookies (since Collection is a general-purpose class)."
"It subjects substitutability to the rules of subtyping of generics."
"While the difference in subtyping between to-one and to-many variables (item 2 above) has been removed, the wrong version has survived: now, a to-one relationship with target type T, implemented as a field having type Option[T], cannot relate to an object of T's subtype S (using Option[S], unless restrictions regarding replacement of the object are accepted)."
"It introduces an aliasing problem."
"While aliasing is a general problem of object-oriented programming (see, e.g., [11, 19]), the use of container objects to implement relationships introduces the special problem of aliasing the container: when two objects share the same container, the relationship of one object cannot evolve differently from that of the other. This may, however, not model the domain correctly, and can lead to subtle programming errors."
"Expressive poverty."
"More generally, given only collections it is not possible to express the difference between 'object a has a collection, which contains objects b1 through bn' and 'object a has objects b1 through bn'. While one might maintain that the former is merely the object-oriented way of representing the latter, and that the used collection is merely an implementation object, it could be the case that the collection is actually a domain object (as which it could even have aliases; cf. above). In object-oriented modelling, by contrast, collections serving as implementation classes are abstracted from by specifying multiplicities larger than 1 (possibly complemented by constraints on the type of the collection, i.e., whether it is ordered, has duplicates, etc.). A collection class in a domain model is therefore always a domain class."
The following figure highlights these problems using a sample program from the internet service provider domain.
http://infochrist.net/coumcoum/Multiplicities.png
Initially, a customer can have a single email account which, according to the price plan selected, is either a POP3 or an IMAP account. Accounts are created by a factory (static method Account.make, Figure 1 left, beginning in line 4) and, for reasons of symmetry, are also deleted by a static method (Account.delete; line 19); due to Java's lack of support for calling by reference (or out parameters), however, delete does not work as expected. Therefore, the resetting of the field account to null has been replicated in the method Customer.transferAccount (line 40).

When the program is upgraded to support multiple accounts per customer, the first change is to alter the type of account to List (Figure 1 right, line 30). As suggested by Problem 1 above, this entails a number of changes. In class Customer it requires the introduction of an iteration over all accounts (line 35), and the replacement of the receiver of the method set, account, with the iteration variable a (Problem 4). In class Account, make must be changed to return a list of accounts (line 4), and the construction of accounts (lines 7 and 12) must be replaced by the construction of lists that contain a single account of the appropriate type. Admittedly, making the Account factory return a list seems awkward; yet, as we will see, it only demonstrates Problem 7. Also, it brings about a change in the conditions of subtyping (Problem 2): for make to be well-typed (which it is not in Figure 1, right), its return type would either have to be changed to List (requiring a corresponding change of the type of Customer.account, thus limiting account's use to read access; Problem 5), or the created lists would need to be changed to element type Account.

The parameter type of Account.delete needs to be changed to List also; replacing the assignment of null with clearing the list (line 20) to better reflect the absence of an account (cf. the above discussions on the different meanings of null) makes delete work as intended, which may however change the semantics of a program actually calling delete (Problem 3). An analogous change from assigning null to calling clear() in Customer.transferAccount (line 40) introduces a logical error, since the transferred account is accidentally cleared as well (Problem 6).
The solution is to use multiplicities, as follows:
The question is now, how can I implement multiplicities in Java?
You are confused about what "API" means. To implement this idea, you would need to edit the source code of the Java compiler, and what you would end up with would no longer be Java; it would be a forked version of Java, which you would have to call something else.
I do not think this idea has much merit, to be honest.
It's unclear why you think this would solve your problem, and using a non-standard JDK will -- in fact -- give you an even greater maintenance burden. For example, when there are new versions of the JDK, you will need to apply your updates to the new version as well when you upgrade. Not to mention the fact that new employees you hire will not be familiar with your language that deviates from Java.
Java does allow one to define custom annotations:
http://docs.oracle.com/javase/1.5.0/docs/guide/language/annotations.html
... and one can use reflection or annotation processors to do cool things with them. However, annotations cannot be used to change program semantics that drastically (like magically making a String mean a List of Strings instead) without forking your own version of the JDK, which is a bad idea.
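What Java does allow, without touching the compiler, is declaring marker annotations and inspecting them reflectively; a minimal sketch (the annotation names are taken from the question, their semantics are purely illustrative):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class MultiplicityAnnotations {

    // Marker annotations only: they carry no behaviour by themselves;
    // a framework or annotation processor would have to interpret them.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.PARAMETER})
    @interface Any {}

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.PARAMETER})
    @interface Option {}

    static class MyClass {
        @Any String name; // the annotation is visible via reflection...
    }

    public static void main(String[] args) throws Exception {
        Field f = MyClass.class.getDeclaredField("name");
        // ...but the field's type is still plain String: nothing will
        // treat it as a List<String> without extra machinery.
        System.out.println(f.isAnnotationPresent(Any.class)); // prints true
        System.out.println(f.getType().getSimpleName());      // prints String
    }
}
```

This demonstrates the limit the answer describes: the annotation is metadata you can read, but it cannot change what the type system makes of the field.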

How to accumulate errors in a functional way upon validating database object?

I have a Product case class which is returned by the DAO layer (using Salat). When a user first creates a product, the product's status remains "draft", and no field (of the product) is mandatory.
What is the best functional way to validate 10 of the product's attributes, accumulate all validation errors into a single entity, and then pass all the errors at once to the front end in JSON format?
I assume the core of the question is how to accumulate errors--JSON formatting is a separate issue and does not depend upon how you have collected your errors.
If it's really just a validation issue, you can have a series of methods
def problemWithX: Option[String] = ...
which return Some(errorMessage) if they are invalid or None if they're okay. Then it's as simple as
List(problemWithX, problemWithY, ...).flatten
to create a list of all of your errors. If the list is empty, you're good to go. If not, you have the errors listed. Creating a sensible error report is the job of each problemWithX method -- and of course you need to decide whether merely a string or more complex information is necessary. (You may even need to define an Invalid trait and have classes extend it to handle different conditions.)
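The same accumulate-by-flattening pattern, rendered in Java terms for comparison (the checks are hypothetical; the Scala flatten corresponds to filtering out the empty Optionals):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class ValidationDemo {

    // Each check returns Optional.of(message) when invalid, empty when OK.
    static Optional<String> problemWithName(String name) {
        return (name == null || name.isEmpty())
                ? Optional.of("name is missing") : Optional.empty();
    }

    static Optional<String> problemWithPrice(int priceInCents) {
        return priceInCents < 0
                ? Optional.of("price is negative") : Optional.empty();
    }

    // "flatten": keep only the errors that are actually present.
    static List<String> validate(String name, int priceInCents) {
        return Arrays.asList(problemWithName(name), problemWithPrice(priceInCents))
                .stream()
                .filter(Optional::isPresent)
                .map(Optional::get)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(validate("", -1));        // both errors accumulated
        System.out.println(validate("widget", 500)); // empty list: valid
    }
}
```

An empty result list means the object is valid; a non-empty list carries every failure at once, ready to be serialized to JSON in one go.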
This is exactly what Scalaz's Validation type is for.