Keyed registration with implicit types and no defaults - autofac

When I register a type with autofac and use PreserveExistingDefaults, it registers a default for T if no previous registrations for that type exist. I know this is how it is intended, but is there a way to have it not register a default at all, but still get registered for implicit types?
My use case is that I want to (1) force consumers of type T to rely on a keyed registration (i.e. throw an exception if T is requested without a key filter), and (2) I also want those registrations to show up in IEnumerable<T> implicit injections.
I saw in this answer that I can just add an As<T> registration to accomplish (2), but it also registers a default so I don't get (1).

Unfortunately, this is not a use case Autofac addresses.
I might suggest that there's a logic flaw in the design anyway. If a developer can't resolve a single T without a key, but they can resolve all the T without keys, then folks will simply work around the problem by resolving all of them and manually choosing the one they want. I would recommend revisiting the requirements so you don't have the situation you describe.


Caching constructor selection instead of lambda conversion

We have a very large number of Autofac resolutions occurring in our application, and a large share of our processing time (about 50% of each web request) has always been spent in the Autofac resolution stage.
We recently realized how important the guide's advice to Register Frequently-Used Components with Lambdas is, and our testing shows it could bring extreme gains for us.
Convert from this
builder.RegisterType<Component>();
to
builder.Register(c => new Component());
It seems the procedure is to use reflection to find the largest constructor, scan its parameters, determine whether Autofac can resolve each of them, and if not, move on to the next-smallest constructor, and so on. So specifying the constructor directly gives us massive improvements.
My question is: could there be, or should there be, some sort of caching of the found constructor? Since containers are immutable, you can't add or remove registrations later in a way that would require a different constructor.
I just wanted to check before we start converting a lot of registrations to lambdas; we have 1550 of them and will try to find the core ones.
Autofac does cache a lot of what it can, but even with container immutability, there are several sources of dynamism:
Registration sources: dynamic suppliers of registrations (that's how open generics get handled, for example)
Child lifetime scopes: the root container might not have the current HttpRequestMessage for the active WebAPI request, but you can add and override registrations in child scopes so the request lifetime does have it.
Parameters on resolve: you can pass parameters to a resolve operation that will get factored into that operation.
Given stuff like that, caching is a little more complicated than "find the right constructor once, cache it, never look again."
Lambdas will help, as you saw, so look into that.
I'd also look at lifetime scopes for components. If constructor lookup is happening a lot, it means you're constructing a lot, which means you're allocating a lot, which is going to cost you other time. Can more components be singletons or per-lifetime-scope? Using an existing instance will be cheaper still.
You also mentioned you have 1550 things registered. Do all of those actually get used and resolved? Don't register stuff that doesn't need to be. I've seen a lot of times where folks do assembly scanning and try to just register every type in every assembly in the whole app. I'm not saying you're doing this, but if you are, don't. Just register the stuff you need.
Finally, think about how many constructors you have. If there's only one (recommended) it'll be faster than trying many different ones. You can also specify the constructor with UsingConstructor during registration to force a choice and bypass searching.

Can I use claims to secure EF fields using PostSharp?

Is it possible to use claims-based permissions to secure EF fields using PostSharp? We have a multi-tenanted app that we are moving to claims, and we also have issues of who can read/write to what fields. I saw this, but it seems role-based: http://www.postsharp.net/aspects/examples/security.
As far as I can see it would just be a case of rewriting the ISecurable part.
We were hoping to be able to decorate a field with a permission and silently ignore any write to it if the user did not have permission. We were also hoping that if they read it we could swap in another value, e.g. read Salary and get back 0 if you don't have a ReadSalary claim.
Are these standard sorts of things to do? I've never done any serious AOP, so I just wanted a quick confirmation before I mention this as an option.
Yes, it is possible to use PostSharp in this case, and it should be pretty easy to convert the given example from RBAC to claims-based authorization.
One thing that has to be considered is performance. When decorated fields are accessed frequently while processing a use case (e.g. read inside a loop), a lot of time is wasted on redundant security checks. Decorating a method that represents an end user's use case would be more appropriate.
I would be wary of silently swapping field values when the user has insufficient permission. It could lead to some very surprising results when an algorithm is fed artificial, unexpected data.

How to extend the Java API to be able to introduce new annotations [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
Can you explain to me how I can extend or change the Java API to use two new annotations, #any and #option, to allow multiplicities in Java?
The main idea behind multiplicities is the following:
Multiplicities help solve many maintenance problems that arise when we change a to-many relationship into a to-one relationship or vice versa.
I would like to use the above annotations for fields, method parameters, and return values.
For example:
class MyClass {
    String #any name; // instead of using List<String>, I would like to use #any

    public String setFullname(String #option name) { // #option is another multiplicity
        ...
    }
}
To allow this definition, I would have to change the Java API and extend it with these two annotations, but I don't know how to do this.
Can you tell me how to change the API, and which steps I must follow to achieve my requirements?
Please look at this paper to understand the issue.
As explained in that paper, using collections to implement to-many relationships causes a number of problems:
"It makes maintenance tedious and error-prone."
<< If the maintenance of a program requires a change of a relationship from to-one to to-many (or vice versa), nearly every occurrence of the variable representing this relationship in the program needs to be touched. In a language with static type checking, these occurrences will be identified by the compiler (as type errors) after the declaration of the field has been changed, so that at least no use is forgotten; in a language without it, the change is extremely error-prone. >>
"It changes conditions of subtyping"
<< If S is a subtype of T, an expression of type S can be assigned to a variable (representing a to-one relationship) of type T. When the relationship is upgraded to to-many and the types of the expression and variable are changed to Collection<S> and Collection<T> to reflect this, the assignment is no longer well-typed [18]. To fix this, the use of a (former to-one and now to-many) variable must be restricted to retrieving elements of its collection, which may require substantial further code changes. We consider this dependency of subtyping on multiplicity to be awkward. >>
"It changes call semantics"
Yet another manifestation of the discontinuity is that when a variable holding a related object is used as an actual parameter of a method call with call-by-value semantics, the method cannot change the value of the variable (i.e., to which object the variable points), and thus cannot change which object the variable's owner is related to. By contrast, when the variable holds a collection of related objects, passing this variable by value to a method allows the method to add to and remove from the collection, and thus to change which objects the variable's owner is related to, effectively giving the call by-reference semantics. We consider this dependency of semantics on multiplicity to be awkward.
Surely, there is an easy fix to all problems stemming from the noted discontinuity: implement to-one relationships using containers also. For instance, the Option class in Scala has two subclasses, Some and None, where Some wraps an object of type E with an object of type Option<E>, which can be substituted by None, so that the object and no object have a uniform access protocol (namely that of Option). By making Option implement the protocol of Collection, the above noted discontinuity will mostly disappear. However, doing so generalizes the problems of collections that stem from putting the content over the container. Specifically:
"Related objects have to be unwrapped before they can be used".
Using containers for keeping related objects, the operations executable on a variable representing the relationship are the operations of the container and not of the related objects. For instance, if cookies have the operation beNibbled(), the same operation can typically not be expected from a collection of cookies (since Collection is a general-purpose class).
"It subjects substitutability to the rules of subtyping of generics". While the difference in subtyping between to-one and to-many variables (item 2 above) has been removed, the wrong version has survived: now, a to-one relationship with target type T, implemented as a field having type Option<T>, cannot relate to an object of T's subtype S (using Option<S>), unless restrictions regarding replacement of the object are accepted.
"It introduces an aliasing problem".
While aliasing is a general problem of object-oriented programming (see, e.g., [11, 19]), the use of container objects to implement relationships introduces the special problem of aliasing the container: when two objects share the same container, the relationship of one object cannot evolve differently from that of the other. This may however not model the domain correctly, and can lead to subtle programming errors.
"Expressive poverty".
More generally, given only collections it is not possible to express the difference between "object a has a collection, which contains objects b1 through bn" and "object a has objects b1 through bn". While one might maintain that the former is merely the object-oriented way of representing the latter, and that the used collection is merely an implementation object, it could be the case that the collection is actually a domain object (as which it could even have aliases; cf. above). In object-oriented modelling, by contrast, collections serving as implementation classes are abstracted from by specifying multiplicities larger than 1 (possibly complemented by constraints on the type of the collection, i.e., whether it is ordered, has duplicates, etc.). A collection class in a domain model is therefore always a domain class.
The following figure highlights these problems using a sample program from the internet service provider domain.
http://infochrist.net/coumcoum/Multiplicities.png
Initially, a customer can have a single email account which, according to the price plan selected, is either a POP3 or an IMAP account. Accounts are created by a factory (static method Account.make, Figure 1 left, beginning in line 4) and, for reasons of symmetry, are also deleted by a static method (Account.delete; line 19); due to Java's lack of support for call by reference (or out parameters), however, delete does not work as expected. Therefore, the resetting of the field account to null has been replicated in the method Customer.transferAccount (line 40).
When the program is upgraded to support multiple accounts per customer, the first change is to alter the type of account to List (Figure 1 right, line 30). As suggested by Problem 1 above, this entails a number of changes. In class Customer it requires the introduction of an iteration over all accounts (line 35), and the replacement of the receiver of the method set, account, with the iteration variable a (Problem 4). In class Account, make must be changed to return a list of accounts (line 4), and the construction of accounts (lines 7 and 12) must be replaced by the construction of lists that contain a single account of the appropriate type. Admittedly, making the Account factory return a list seems awkward; yet, as we will see, it only demonstrates Problem 7. Also, it brings about a change in the conditions of subtyping (Problem 2): for make to be well-typed (which it is not in Figure 1, right), its return type would either have to be changed to List (requiring a corresponding change of the type of Customer.account, thus limiting account's use to read access; Problem 5), or the created lists would need to be changed to element type Account.
The parameter type of Account.delete needs to be changed to List as well; replacing the assignment of null with clearing the list (line 20), to better reflect the absence of an account (cf. the above discussions on the different meanings of null), makes delete work as intended, which may however change the semantics of a program actually calling delete (Problem 3). An analogous change from assigning null to calling clear() in class Account, line 40, introduces a logical error, since the transferred account is accidentally cleared as well (Problem 6).
The solution is to use multiplicities, as follows (see the comment below for the image):
The question is now, how can I implement multiplicities in Java?
You are confused about what API means. To implement this idea, you would need to edit the source code of the Java compiler, and what you would end up with would no longer be Java, it would be a forked version of Java, which you would have to call something else.
I do not think this idea has much merit, to be honest.
It's unclear why you think this would solve your problem, and using a non-standard JDK will in fact give you an even greater maintenance burden. For example, when there are new versions of the JDK, you will need to apply your changes to each new version as you upgrade. Not to mention that new employees you hire will not be familiar with your language that deviates from Java.
Java does allow one to define custom annotations:
http://docs.oracle.com/javase/1.5.0/docs/guide/language/annotations.html
... and one can use reflection or annotation processors to do cool things with them. However, annotations cannot be used to change program semantics so drastically (like magically making a String mean a List of strings) without forking your own version of the JDK, which is a bad idea.
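To make the distinction concrete: a custom annotation attaches metadata that tools can read via reflection, but it never changes the declared type of a field. A minimal sketch (the @Any annotation and the class names here are hypothetical, not from the paper):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.List;

// Hypothetical marker annotation: it carries no behavior of its own and
// must be retained at runtime to be visible to reflection.
@Retention(RetentionPolicy.RUNTIME)
@interface Any {}

class Customer {
    // The field must still be declared as List<String>; @Any is metadata
    // only, and cannot make a plain String field hold many values.
    @Any
    List<String> names;
}

public class AnnotationDemo {
    public static void main(String[] args) throws Exception {
        Field f = Customer.class.getDeclaredField("names");
        // Reflection sees the annotation, but the type system is unchanged.
        System.out.println(f.isAnnotationPresent(Any.class)); // true
        System.out.println(f.getType().getSimpleName());      // List
    }
}
```

An annotation processor could consume such metadata at compile time (e.g. to generate helper code), but it cannot rewrite `String #any name` into a list type; that would require a new language.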

How to accumulate errors in a functional way upon validating database object?

I have a Product case class, which is returned by the DAO layer (using Salat). When a user creates a product for the first time, the product's status remains "draft", and no field (of the product) is mandatory.
What are the best functional ways to validate 10 of product's attributes, accumulate all validation errors into a single entity and then pass all errors at once in a JSON format to front end?
I assume the core of the question is how to accumulate errors; JSON formatting is a separate issue and does not depend on how you have collected your errors.
If it's really just a validation issue, you can have a series of methods
def problemWithX: Option[String] = ...
which return Some(errorMessage) if they are invalid or None if they're okay. Then it's as simple as
List(problemWithX, problemWithY, ...).flatten
to create a list of all of your errors. If the list is empty, you're good to go. If not, you have the errors listed. Creating a sensible error report is the job of the problemWithX method, and of course you need to decide whether merely a string or more complex information is necessary. (You may even need to define an Invalid trait and have classes extend it to handle different conditions.)
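The accumulate-then-check idea is language-agnostic; here is a runnable sketch rendered in Java with java.util.Optional for illustration (the validator names and rules are hypothetical, standing in for the ten product attributes):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class ValidationDemo {
    // Hypothetical validators: Optional.empty() means "no problem found",
    // a present value carries the error message.
    static Optional<String> problemWithName(String name) {
        return (name == null || name.isEmpty())
                ? Optional.of("name must not be empty")
                : Optional.empty();
    }

    static Optional<String> problemWithPrice(double price) {
        return price < 0
                ? Optional.of("price must be non-negative")
                : Optional.empty();
    }

    public static void main(String[] args) {
        // Run every check and keep only the failures, so all errors are
        // reported together instead of failing fast on the first one.
        List<String> errors = Arrays.asList(
                        problemWithName(""),
                        problemWithPrice(-1.0)).stream()
                .filter(Optional::isPresent)
                .map(Optional::get)
                .collect(Collectors.toList());
        System.out.println(errors); // both messages, accumulated
    }
}
```

An empty list means the product is valid; otherwise the whole list can be serialized to JSON in one response, which matches the requirement of passing all errors to the front end at once.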
This is exactly what ScalaZ's Validation type is for.

Why does Optional not extend Supplier

I used Supplier quite often and I'm looking at new Guava 10 Optional now.
In contrast to a Supplier, an Optional guarantees never to return null but will throw an IllegalStateException instead. In addition, it is immutable and thus has a fixed, known value once it is created. In contrast, a Supplier may be used to create different or lazy values triggered by calling get() (but it is not required to do so).
I followed the discussion about why an Optional should not extend a Supplier and I found:
...it would not be a well-behaved Supplier
But I can't see why, as Supplier explicitly states:
No guarantees are implied by this interface.
For me it would fit, but it seems I have been employing Suppliers in a different way than was originally intended. Can someone please explain to me why an Optional should NOT be used as a Supplier?
Yes: it is quite easy to convert an Optional into a Supplier (and you may additionally choose whether the adapted Supplier.get() returns Optional.get() or Optional.orNull()), but you need an additional transformation and have to create new objects for each :-(
Seems there is some mismatch between the intended use of a Supplier and my understanding of its documentation.
Dieter.
Consider the case of
Supplier<String> s = Optional.absent();
Think about this: you have a type containing one method that takes no arguments, but for which it's a programmer error to ever invoke that method! Does that really make sense?
You'd only want Supplierness for "present" optionals, but then, just use Suppliers.ofInstance.
A Supplier is generally expected to be capable of returning objects (assuming no unexpected errors occur). An Optional is something that explicitly may not be capable of returning objects.
I think "no guarantees are implied by this interface" generally means that there are no guarantees about how it retrieves an object, not that the interface need not imply the ability to retrieve an object at all. Even if you feel it is OK for a Supplier instance to throw an exception every time you call get() on it, the Guava authors do not feel that way, and they chose to provide only suppliers that can be expected to be generally well-behaved.
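The mismatch is easy to see in a small sketch. This uses java.util.Optional (Java 8+) rather than Guava's, purely for a self-contained example; the behavior is analogous, except that Guava's Optional.get() throws IllegalStateException where java.util's throws NoSuchElementException:

```java
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.function.Supplier;

public class OptionalSupplierDemo {
    public static void main(String[] args) {
        // A present Optional adapts to a Supplier without trouble:
        Optional<String> present = Optional.of("value");
        Supplier<String> fromPresent = present::get; // get() will succeed
        System.out.println(fromPresent.get()); // prints "value"

        // But the same adapter applied to an empty Optional yields a
        // Supplier whose only method is a programmer error to call:
        Optional<String> absent = Optional.empty();
        Supplier<String> fromAbsent = absent::get;
        try {
            fromAbsent.get();
        } catch (NoSuchElementException e) {
            System.out.println("empty Optional cannot supply a value");
        }
    }
}
```

This is the answer's point in code: the Supplier contract only makes sense for present optionals, and for those you can just wrap the contained value directly (in Guava, Suppliers.ofInstance).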