I am trying to get Dagger up and working on my project.
However I get the following exception on one of my classes during compilation:
error: No injectable members on Foo. Do you want to add an injectable constructor?
However, the class has no dependencies and as such uses the default no-arg constructor:
public class Foo
{
...
}
Do I really have to add an injectable no-arg constructor like below?
public class Foo
{
    @Inject
    public Foo()
    {
    }
    ...
}
From the docs:
Classes that lack @Inject annotations cannot be constructed by Dagger.
Dagger actively requires you to add @Inject to your injectable class, either by annotating a no-args constructor or by annotating an injectable field. The third option is to return the class from an @Provides method, like so:
@Module(...)
class MyModule {
    @Provides Foo provideFoo() {
        return new Foo(); // Foo is not injectable.
    }
}
This does seem like extra boilerplate, but from experience with Guice and other frameworks, JIT binding of random classes turns out to be rife with error. We have seen java.lang.String injected into things, and because someone forgot to bind it, you ended up with "" injected instead of the desired string. Dagger therefore requires an @Inject-annotated constructor or field. (Guice optionally has this in 4.x, though for backwards compatibility it is off by default.)
This is one rare instance where Dagger has chosen more correctness guarantees at the cost of some small amount of verbosity.
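For completeness, here is a rough sketch of how the @Provides approach above could then be consumed, assuming Dagger 1's ObjectGraph API (the Bootstrap class is made up; MyModule is the module from the snippet above):
import dagger.ObjectGraph;

// A minimal sketch, assuming Dagger 1. In practice the module usually also
// needs to declare injects = Foo.class (or another injectable entry point).
public class Bootstrap {
    public static void main(String[] args) {
        ObjectGraph graph = ObjectGraph.create(new MyModule());
        Foo foo = graph.get(Foo.class); // resolved via the @Provides method
    }
}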
Is there a compiler meta for a class declaration that prevents creating an instance of the class unless it is extended? In other words, some sort of opposite of the @:final meta.
Like so (last line of code):
class A {
// ...
}
class B extends A {
// ...
}
// ...
var b = new B(); // OK
var a = new A(); // induce compiler-error
Simply don't declare a constructor at all for class A
Both the other answers are correct (no constructor or private constructor), but there are a few more details that may interest you:
Here's an example of no constructor. Of note is that A simply doesn't have a constructor, and B simply doesn't call super(). Other than that, everything else works as you'd expect.
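For instance, a minimal sketch of that approach (the doSomething method is made up):
class A {
    // no constructor declared here
    public function doSomething():Void {
        trace("hello from A");
    }
}

class B extends A {
    public function new() {
        // no super() call needed, since A has no constructor
    }
}

class Main {
    static function main() {
        var b = new B();    // OK
        b.doSomething();
        // var a = new A(); // compile-time error: A has no constructor
    }
}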
Here's an example of a private constructor. You still can't instantiate a new A(), but you do still need to call super() from B's constructor.
Technicalities:
Use of some features (like a default value on a member variable) will cause A to get an implicit constructor, automatically. Don't worry, this doesn't affect constructability or whether you need to call super(). But know that it is there, and if necessary an implicit super() call is prepended to B's constructor. See the JS output to verify this.
In any case, know that you can still instantiate an A at runtime with var a = Type.createInstance(A,[]); as compile-time type checks do not limit RTTI.
Related discussion:
Aside from private/no constructor, Haxe doesn't have a formal notion of abstract classes¹ (base classes not expected to be instantiated) or abstract methods² (functions on abstract base classes with no implementation that must be implemented by a derived class). However, Andy Li wrote a macro for enforcing some of those concepts if you use them. Such a macro can detect violations of these rules and throw compile-time errors.
1. Not to be confused with Haxe abstract types, which are an entirely different topic.
2. Not to be confused with virtual functions, which Wikipedia describes as a function which can be overridden (though various docs for various languages describe this highly loaded term differently).
One way of achieving this is to create a private class constructor:
class A {
private function new() {
// ...
}
}
// ...
var a = new A(); // Error: Cannot access private constructor
I've just started learning what Dependency Injection and Inversion of Control are, but there is one thing I can't get. Is interface injection simply this: I define an interface that describes a method which needs to be implemented, that method receives an instance of some class as a parameter, and the class that implements the interface just provides the body of that method?
An interface is only a contract that defines what public members a class should implement. It does not control the actual implementation - you need a concrete class to do that.
// This is only a contract that defines what members
// all concrete types must implement.
public interface ISomeType
{
void DoSomething();
}
// This class implements the interface. Therefore, it must
// have all of the methods the contract specifies. In some
// languages, this can be done implicitly just by adding the
// member, but it usually must be public.
public class SomeType : ISomeType
{
public void DoSomething()
{
Console.WriteLine("Hello World");
}
}
When you make a class implement an interface it implicitly means that instances of the class can be cast to the interface type.
ISomeType x = new SomeType();
Dependency Injection takes advantage of this behavior. You typically define both the interface type and the concrete implementation together in a mapping.
container.For<ISomeType>().Use<SomeType>();
Then when a service is declared to take ISomeType as a constructor argument, the map is used to determine which concrete type to create an instance of.
public class SomeService : ISomeService
{
private readonly ISomeType someType;
public SomeService(ISomeType someType)
{
if (someType == null) throw new ArgumentNullException("someType");
this.someType = someType;
}
}
The recommended way is to allow the DI container to do this implicitly when your entire object graph is composed (in the Composition Root), but it is also possible to do it explicitly (and it makes a better example):
ISomeService someService = container.GetInstance<ISomeService>();
Assuming that the container was configured to map ISomeService to SomeService (like I showed before with ISomeType), this one line of code will create an instance of SomeService and automatically inject an instance of SomeType into its constructor.
It is difficult to see the point in a simple example, though. Dependency Injection is meant for complex applications with many types. It simplifies things when the application is complex, but when the application is simple it has a tendency to make things more complex.
I am confused about backward compatibility when adding a method with a default implementation to a trait. For example:
Previous Version
trait Foo
New Version
trait Foo {
def verifyConsistency: Option[String] = ??? // provide default implementation
}
The Migration Manager reports this addition as a binary incompatibility. Is that correct?
Well yes it is correct.
When you define trait Foo, it will under the hood create both a (JVM) interface Foo and a (JVM) class Foo$class with all the method implementations defined as static methods. The corresponding Java code would look something like this (for your new definition of Foo):
interface Foo {
Option<String> verifyConsistency();
}
class Foo$class {
static Option<String> verifyConsistency(Foo self) {
Predef.???();
}
}
When you mix Foo into a concrete class Bar, what happens at the JVM level is that Bar implements the interface Foo, and it implements the method verifyConsistency by simply forwarding the call to Foo$class:
class Bar implements Foo {
Option<String> verifyConsistency() {
return Foo$class.verifyConsistency(this); // simple forwarding
}
}
The reason why it is done this way is that the JVM object model does not support multiple inheritance. The trait implementations cannot simply be put in classes that you would extend from, because you can only ever extend a single class on the JVM.
The takeaway of this situation is that every time a concrete class mixes in a trait, the class defines "stub" methods for each member of the trait (those methods simply forward to the actual implementation, which is a static method).
One consequence is that if you add a new method to a trait, even if you define an implementation, that is not enough: concrete classes that mix in the trait need to be recompiled (so that a stub for the new method is added to the class). If you don't recompile those classes, your program will fail to run, as you would now have a class that is supposedly concrete (non-abstract) and extends the corresponding interface, but actually misses the implementation for the new method.
In your case this means having concrete classes that extend interface Foo but do not have any implementation for verifyConsistency.
Hence the binary incompatibility.
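To make the failure concrete, here is a hedged sketch (Bar and Main are made-up names): suppose Bar was compiled against the old, empty Foo, so it has no forwarder stub for verifyConsistency, and you then run it against the new Foo without recompiling.
// Compiled against the OLD (empty) trait Foo, so no forwarder stub
// for verifyConsistency was generated in Bar.
class Bar extends Foo

object Main extends App {
  val foo: Foo = new Bar
  // Run against the NEW Foo without recompiling Bar and this call
  // typically fails at runtime (e.g. with an AbstractMethodError):
  // the interface now declares verifyConsistency, but Bar has no body for it.
  foo.verifyConsistency
}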
Sorry for the catchy title. ;-)
I want to create a package-private class with a package-private method in Scala, so my class looks somewhat like this:
package net.java.truevfs.ext.pace
import ...
private[pace] abstract class AspectController(controller: FsController)
extends FsDecoratingController(controller) {
private[pace] def apply[V](operation: => V): V
... // lots of other stuff
}
However, if I use javap to check what the Scala compiler effectively creates, I get something like this:
$ javap -classpath target/classes net.java.truevfs.ext.pace.AspectController
Compiled from "AspectController.scala"
public abstract class net.java.truevfs.ext.pace.AspectController extends net.java.truevfs.kernel.spec.FsDecoratingController implements scala.ScalaObject{
public abstract java.lang.Object apply(scala.Function0);
...
}
This means that although the Scala compiler might respect the access restrictions, I could still call this class from any Java code, which is a clear encapsulation violation.
Am I missing something?
Is there a way to make this work as intended?
In addition to @Régis' answer, the reason the Scala compiler doesn't make the class package-private is that by Scala rules it can be accessed from other packages: namely, subpackages of net.java.truevfs.ext.pace. E.g.
package net.java.truevfs.ext.pace.subpackage
import net.java.truevfs.ext.pace.AspectController
class Subclass extends AspectController { ... }
is legal in Scala, but in Java classes from net.java.truevfs.ext.pace.subpackage can't access package-private classes from net.java.truevfs.ext.pace.
You are not missing anything.
Many of the access restrictions in Scala have no equivalent in Java or at the JVM level. The additional information is obviously right there in the .class file, but it is there as custom annotations that only the Scala compiler will interpret.
The Scala object model can only partly be matched to the JVM object model, and a Java compiler will only see this partial model.
I'd say that the match is pretty close and the Scala compiler does a very good job at Java interoperability, but nothing's perfect.
Not really a 100% correct answer...
You can make a package object if you want to do some fancy stuff in there with a private class. The package object is accessed like any other package.
The class MyClass is private to that package object; it is not package-private in the Java sense, however.
package com.jasongoodwin

package object foo {
  private class MyClass

  class AnotherClass {
    val myClass = new MyClass
  }
}
I have a custom deferred binder (rebind implementation) that instantiates objects. I would like to have some dependencies (@Inject-annotated setter methods) within the instance returned by GWT.create() fulfilled by GIN. Is this possible?
So, given code such as:
Foo foo = GWT.create(Foo.class);
if foo's final implementation has:
@Inject
public void setBar(Bar bar) {
...
}
how do I get bar injected into the returned foo instance automatically by GIN?
Your Ginjector can have methods added to it for the purpose of injecting objects created in some other way. These must take one argument, and should specify the most specific type possible. For example, if MyViewImpl extends BaseView, and both types have dependencies to inject, but you declare
void injectBaseView(BaseView view);
in your ginjector, only the fields/setters declared on BaseView will be dealt with.
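Putting that together with the question's Foo/setBar example, a minimal sketch (MyGinjector, MyGinModule, and Bootstrap are made-up names; MyGinModule is assumed to bind Bar):
import com.google.gwt.core.client.GWT;
import com.google.gwt.inject.client.GinModules;
import com.google.gwt.inject.client.Ginjector;

@GinModules(MyGinModule.class)
interface MyGinjector extends Ginjector {
    // Member-injection method: GIN fills in @Inject fields/setters declared
    // on the parameter's declared type. If setBar lives on a subtype of Foo,
    // declare the method for that subtype instead, as noted above.
    void injectFoo(Foo foo);
}

class Bootstrap {
    void init() {
        MyGinjector injector = GWT.create(MyGinjector.class);
        Foo foo = GWT.create(Foo.class); // your custom rebind result
        injector.injectFoo(foo);         // GIN now injects setBar(...) on foo
    }
}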
EDIT: Also, if no binding is declared, GWT.create will be used to create an instance, so you can have your cake and eat it too. One exception to that, as far as I can recall, is when you want to GWT.create one type but return another (see RPC interfaces for an example).