In my current Java code, I'm doing some custom annotation processing using class-level annotations, i.e. annotations declared with @java.lang.annotation.Target({ElementType.TYPE}).
The target classes contain only static utility methods, so in Kotlin I used file-scoped (top-level) functions instead. How do I add these annotations to the generated Kt class?
In Java:
// Utils.java
package com.example;
@MyCustomAspect
public class Utils {
    public static void doStuff() {
        System.out.println("Hello");
    }
}
Now in Kotlin:
// Utils.kt
package com.example

// ??? @MyCustomAspect ???
fun doStuff() {
    println("Hello")
}
You can use AnnotationTarget.FILE to allow a Kotlin-defined annotation to target the file facade class (the Kt class) generated from a .kt file. A Java-defined annotation with target ElementType.TYPE can also be applied to the Kotlin file class:
@file:MyCustomAspect
package org.example

@Target(AnnotationTarget.FILE)
annotation class MyCustomAspect

fun doStuff() {
}
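Kotlin annotations default to runtime retention, so you can check from plain Java that the annotation really lands on the generated file facade class. A minimal sketch, assuming the annotated file above is named Utils.kt (the AnnotationCheck class is a hypothetical helper, not part of the question):

package org.example;

public class AnnotationCheck {
    public static void main(String[] args) throws Exception {
        // File-level annotations end up on the file facade class UtilsKt,
        // whose name is derived from the file name Utils.kt.
        Class<?> facade = Class.forName("org.example.UtilsKt");
        System.out.println(facade.isAnnotationPresent(MyCustomAspect.class)); // true
    }
}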
Often people ask AspectJ questions like this one, so I want to answer it in a place I can easily link to later.
I have this marker annotation:
package de.scrum_master.app;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
@Inherited
@Retention(RetentionPolicy.RUNTIME)
public @interface Marker {}
Now I annotate an interface and/or methods like this:
package de.scrum_master.app;
@Marker
public interface MyInterface {
    void one();
    @Marker void two();
}
Here is a little driver application which also implements the interface:
package de.scrum_master.app;
public class Application implements MyInterface {
    @Override
    public void one() {}

    @Override
    public void two() {}

    public static void main(String[] args) {
        Application application = new Application();
        application.one();
        application.two();
    }
}
Now when I define this aspect, I expect that it gets triggered
for each constructor execution of an annotated class and
for each execution of an annotated method.
package de.scrum_master.aspect;
import de.scrum_master.app.Marker;
public aspect MarkerAnnotationInterceptor {
    after() : execution((@Marker *).new(..)) && !within(MarkerAnnotationInterceptor) {
        System.out.println(thisJoinPoint);
    }

    after() : execution(@Marker * *(..)) && !within(MarkerAnnotationInterceptor) {
        System.out.println(thisJoinPoint);
    }
}
Unfortunately the aspect prints nothing, just as if class Application and method two() did not have any @Marker annotation. Why does AspectJ not intercept them?
The problem here is not AspectJ but the JVM. In Java, annotations on
interfaces,
methods or
other annotations
are never inherited by
implementing classes,
overriding methods or
classes using annotated annotations.
Annotation inheritance only works from classes to subclasses, and only if the annotation type used in the superclass bears the meta annotation @Inherited, see the JDK JavaDoc.
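You can see this limitation with plain reflection, no AspectJ involved. A small sketch using the classes above, before any aspect is applied:

package de.scrum_master.app;

public class InheritanceDemo {
    public static void main(String[] args) throws Exception {
        // Interface -> implementing class: @Marker is NOT inherited,
        // despite @Inherited on the annotation type.
        System.out.println(Application.class.isAnnotationPresent(Marker.class)); // false
        // Interface method -> overriding method: also NOT inherited.
        System.out.println(Application.class.getMethod("two").isAnnotationPresent(Marker.class)); // false
        // The annotation is, of course, visible on the interface itself.
        System.out.println(MyInterface.class.isAnnotationPresent(Marker.class)); // true
    }
}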
AspectJ is a JVM language and thus works within the JVM's limitations. There is no general solution for this problem, but for specific interfaces or methods you wish to emulate annotation inheritance for, you can use a workaround like this:
package de.scrum_master.aspect;
import de.scrum_master.app.Marker;
import de.scrum_master.app.MyInterface;
/**
* It is a known JVM limitation that annotations are never inherited from interface
* to implementing class or from method to overriding method, see explanation in
* JDK API.
* <p>
* Here is a little AspectJ trick which does it manually.
*
*/
public aspect MarkerAnnotationInheritor {
    // Implementing classes should inherit the marker annotation
    declare @type: MyInterface+ : @Marker;
    // Overriding 'two()' methods should inherit the marker annotation
    declare @method: void MyInterface+.two() : @Marker;
}
Please note: With this aspect in place, you can remove the (literal) annotations from the interface and from the annotated method, because AspectJ's ITD (inter-type declaration) mechanism adds them back to the interface plus to all implementing/overriding classes/methods.
Now the console log when running the Application says:
execution(de.scrum_master.app.Application())
execution(void de.scrum_master.app.Application.two())
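You can also double-check via reflection that the ITD-added annotations really end up in the woven bytecode; a quick check, assuming the classes were compiled/woven with ajc (@Marker has runtime retention, so both lines print true):

package de.scrum_master.app;

public class WeavingCheck {
    public static void main(String[] args) throws Exception {
        // Both print 'true' after MarkerAnnotationInheritor has done its work
        System.out.println(Application.class.isAnnotationPresent(Marker.class));
        System.out.println(Application.class.getMethod("two").isAnnotationPresent(Marker.class));
    }
}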
By the way, you could also embed the aspect right into the interface so as to have everything in one place. Just be careful to rename MyInterface.java to MyInterface.aj in order to help the AspectJ compiler to recognise that it has to do some work here.
package de.scrum_master.app;
public interface MyInterface {
    void one();
    void two();

    // Cannot omit 'static' here due to https://bugs.eclipse.org/bugs/show_bug.cgi?id=571104
    public static aspect MarkerAnnotationInheritor {
        // Implementing classes should inherit the marker annotation
        declare @type: MyInterface+ : @Marker;
        // Overriding 'two()' methods should inherit the marker annotation
        declare @method: void MyInterface+.two() : @Marker;
    }
}
Update 2021-02-11: Someone suggested an edit to the latter solution, saying that the aspect MarkerAnnotationInheritor nested inside interface MyInterface is implicitly public static, so the modifiers in the aspect declaration could be omitted. In principle this is true, because members (methods, nested classes) of interfaces are always public by default and a non-static inner class definition would not make sense inside an interface either (there is no instance to bind it to). I like to be explicit in my sample code, though, because not all Java developers might know these details.
Furthermore, currently the AspectJ compiler in version 1.9.6 throws an error if we omit static. I have just created AspectJ issue #571104 for this problem.
I have a question regarding the usage of ProGuard together with a Scala AWS Lambda function. I have created a very simple AWS Lambda function like this:
package example
import scala.collection.JavaConverters._
import com.amazonaws.services.lambda.runtime.events.S3Event
import com.amazonaws.services.lambda.runtime.Context
object Main extends App {
  def kinesisEventHandler(event: S3Event, context: Context): Unit = {
    val result = event.getRecords.asScala.map(m => m.getS3.getObject.getKey)
    println(result)
  }
}
I have imported the following packages:
"com.amazonaws" % "aws-lambda-java-core" % "1.1.0"
"com.amazonaws" % "aws-lambda-java-events" % "1.3.0"
When I create a fat jar, it is 13 MB in size and works as expected as an AWS Lambda function (it only produces test output for now).
13 MB is very big, so I tried ProGuard to shrink the jar, but it isn't working: I keep running into problems, and after two days I have no more ideas how to solve this.
Here is my proguard configuration:
-injars "/Users/x/x/x/AWS_Lambda/target/scala-2.12/lambda-demo-assembly-1.0.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/org.scala-lang/scala-library/scala-library-2.12.1.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/com.amazonaws/aws-lambda-java-core/aws-lambda-java-core-1.1.0.jar"
-libraryjars "/Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home/jre/lib/rt.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/com.amazonaws/aws-java-sdk-s3/aws-java-sdk-s3-1.11.0.jar"
-libraryjars "/Users/x/x/x/AWS_Lambda/lib_managed/jars/com.amazonaws/aws-lambda-java-events/aws-lambda-java-events-1.3.0.jar"
-outjars "/Users/x/x/x/AWS_Lambda/target/scala-2.12/proguard/lambda-demo_2.12-1.0.jar"
-dontoptimize
-dontobfuscate
-dontnote
-dontwarn
-keepattributes SourceFile,LineNumberTable
# Preserve all annotations.
-keepattributes *Annotation*
# Preserve all public applications.
-keepclasseswithmembers public class * {
public static void main(java.lang.String[]);
}
# Preserve some classes and class members that are accessed by means of
# introspection.
-keep class * implements org.xml.sax.EntityResolver
-keepclassmembers class * {
** MODULE$;
}
-keepclassmembernames class scala.concurrent.forkjoin.ForkJoinPool {
long eventCount;
int workerCounts;
int runControl;
scala.concurrent.forkjoin.ForkJoinPool$WaitQueueNode syncStack;
scala.concurrent.forkjoin.ForkJoinPool$WaitQueueNode spareStack;
}
-keepclassmembernames class scala.concurrent.forkjoin.ForkJoinWorkerThread {
int base;
int sp;
int runState;
}
-keepclassmembernames class scala.concurrent.forkjoin.ForkJoinTask {
int status;
}
-keepclassmembernames class scala.concurrent.forkjoin.LinkedTransferQueue {
scala.concurrent.forkjoin.LinkedTransferQueue$PaddedAtomicReference head;
scala.concurrent.forkjoin.LinkedTransferQueue$PaddedAtomicReference tail;
scala.concurrent.forkjoin.LinkedTransferQueue$PaddedAtomicReference cleanMe;
}
# Preserve some classes and class members that are accessed by means of
# introspection in the Scala compiler library, if it is processed as well.
#-keep class * implements jline.Completor
#-keep class * implements jline.Terminal
#-keep class scala.tools.nsc.Global
#-keepclasseswithmembers class * {
# <init>(scala.tools.nsc.Global);
#}
#-keepclassmembers class * {
# *** scala_repl_value();
# *** scala_repl_result();
#}
# Preserve all native method names and the names of their classes.
-keepclasseswithmembernames,includedescriptorclasses class * {
native <methods>;
}
# Preserve the special static methods that are required in all enumeration
# classes.
-keepclassmembers,allowoptimization enum * {
public static **[] values();
public static ** valueOf(java.lang.String);
}
# Explicitly preserve all serialization members. The Serializable interface
# is only a marker interface, so it wouldn't save them.
# You can comment this out if your application doesn't use serialization.
# If your code contains serializable classes that have to be backward
# compatible, please refer to the manual.
-keepclassmembers class * implements java.io.Serializable {
static final long serialVersionUID;
static final java.io.ObjectStreamField[] serialPersistentFields;
private void writeObject(java.io.ObjectOutputStream);
private void readObject(java.io.ObjectInputStream);
java.lang.Object writeReplace();
java.lang.Object readResolve();
}
# Your application may contain more items that need to be preserved;
# typically classes that are dynamically created using Class.forName:
# -keep public class mypackage.MyClass
# -keep public interface mypackage.MyInterface
# -keep public class * implements mypackage.MyInterface
-keep,includedescriptorclasses class example.** { *; }
-keepclassmembers class * {
<init>(...);
}
When I run this, my jar is very small (around 5 MB), but when I launch the Lambda I get the following error:
"errorMessage": "java.lang.NoSuchMethodException: com.amazonaws.services.s3.event.S3EventNotification.parseJson(java.lang.String)",
"errorType": "lambdainternal.util.ReflectUtil$ReflectException"
I had a look at the class, and ProGuard had deleted this method. When I changed the config to also keep this class, I got another problem in another class.
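For reference, keeping that reflectively called class would look something like this (though, as described, it only pushed the problem to the next class):

# Keep the class that the AWS Lambda runtime looks up via reflection
-keep class com.amazonaws.services.s3.event.S3EventNotification { *; }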
Has somebody already used ProGuard with a Scala AWS Lambda function and found a good configuration, or does anybody know about this problem? Is there any other good solution to shrink the jar size?
Best,
Lothium
Honestly, 13MB isn't that big. But, as much as I'm sure that this is going to be considered heresy to a Scala developer, I created an equivalent method in Java and it's a bit over 7MB. I didn't try to use Proguard on it - it may shrink further.
That was with the S3Event package as you're using. If you look at what gets included because of that package it brings in tons of extra stuff - SQS, SNS, Dynamo and so on. Ultimately that is the biggest part. I did a little test to try to eliminate all libraries except for aws-lambda-java-core and instead used JsonPath. That got my jar file to 458K.
My code is below. I know it's not Scala but perhaps you can get some ideas from it. The key was eliminating as many AWS libraries as possible. Of course, if you want to do anything more than print keys in your Lambda you'll need to bring in more AWS libraries which, again, makes the size about 7MB.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.List;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.jayway.jsonpath.JsonPath;
public class S3EventLambdaHandler implements RequestStreamHandler {
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) {
        try {
            List<String> keys = JsonPath.read(inputStream, "$.Records[*].s3.object.key");
            for (String nextKey : keys)
                System.out.println(nextKey);
        }
        catch (IOException ioe) {
            context.getLogger().log("caught IOException reading input stream");
        }
    }
}
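If you want to reproduce this slimmed-down variant from sbt, the only dependencies this handler needs are aws-lambda-java-core and JsonPath; something like the following (the json-path version is an assumption, use whatever is current):

"com.amazonaws" % "aws-lambda-java-core" % "1.1.0"
"com.jayway.jsonpath" % "json-path" % "2.2.0"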
I am migrating from org.apache.felix.scr annotations to org.osgi.service.component annotations. I have a set of components that inherit from a common abstract class. In the Felix case, I can use a @Component annotation with the option componentAbstract=true on the superclass, and then use @Reference annotations in the superclass. I cannot find how to migrate this to the OSGi annotations.
Is it possible to use Component annotations in a superclass of a Component? And if so, what is the appropriate way to handle the properties and metatype generation?
So, what I am looking for is something like this:
/* No component definition should be generated for the parent, as it is
   abstract and cannot be instantiated */
@Component(property = "parent.property=parentValue")
public abstract class Parent {
    @Reference
    protected Service aService;

    protected void activate(Map<String, Object> props) {
        System.out.println("I have my parent property: " + props.get("parent.property"));
    }

    public abstract void doSomething();
}
/* For this class, the proper Component definition should be generated, also
   including the information coming from the annotations in the parent */
@Component(property = "child.property=childValue")
public class Child extends Parent {
    @Activate
    public void activate(Map<String, Object> props) {
        super.activate(props);
        System.out.println("I have my child property: " + props.get("child.property"));
    }

    public void doSomething() {
        aService.doSomething();
    }
}
By default, bnd will not process DS annotations in parent classes. You can change that with -dsannotations-options: inherit, but please see http://enroute.osgi.org/faq/ds-inheritance.html for why you shouldn't!
2021-02-23 UPDATE: It seems like the page mentioned above is no longer available. I don't know if it was moved elsewhere or simply removed but its content (in Markdown format) is still available on GitHub: https://github.com/osgi/osgi.enroute.site/blob/pre-R7/_faq/ds-inheritance.md
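For completeness, if you decide to enable inheritance despite the caveats described there, the instruction goes into your bnd.bnd file (or the equivalent bnd plugin configuration):

# bnd.bnd -- enables DS annotation processing in superclasses
-dsannotations-options: inherit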
My app has a class hierarchy as follows.
package com.sample.folder1;
public class ParentClass { }
package com.sample.folder2;
public class Childclass1 extends ParentClass { }
Unit test under the test package:
@Test
public void testMocking() {
    Childclass1 obj = PowerMockito.mock(Childclass1.class);
}
When I execute the above JUnit test in Eclipse, it throws
"VerifyError: Inconsistent stackmap frames....."
Please suggest how to mock hierarchy classes in the same and in different packages.
Try adding -noverify (Java 8) or -XX:-UseSplitVerifier (Java 7) as a VM parameter.
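For example, if your tests also run through Maven Surefire, the same flag can be passed to the forked test JVM (a sketch; adjust to your build):

<!-- pom.xml, inside <build><plugins> -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-noverify</argLine>
  </configuration>
</plugin>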
Dagger 1's plus() method is something I used quite often in previous applications, so I understand situations where you might want to have a subcomponent with full access to the parent graph's bindings.
In what situation would it be beneficial to use a component dependency instead of a subcomponent dependency and why?
Component dependencies - Use this when you want to keep two components independent.
Subcomponents - Use this when you want to keep two components coupled.
I will use the below example to explain Component dependencies and Subcomponents. Some points worth noticing about the example are:
SomeClassA1 can be created without any dependency. ModuleA provides an instance of SomeClassA1 via the provideSomeClassA1() method.
SomeClassB1 cannot be created without SomeClassA1. ModuleB can provide an instance of SomeClassB1 only if an instance of SomeClassA1 is passed as an argument to the provideSomeClassB1() method.
@Module
public class ModuleA {
    @Provides
    public SomeClassA1 provideSomeClassA1() {
        return new SomeClassA1();
    }
}

@Module
public class ModuleB {
    @Provides
    public SomeClassB1 provideSomeClassB1(SomeClassA1 someClassA1) {
        return new SomeClassB1(someClassA1);
    }
}

public class SomeClassA1 {
    public SomeClassA1() {}
}

public class SomeClassB1 {
    private SomeClassA1 someClassA1;

    public SomeClassB1(SomeClassA1 someClassA1) {
        this.someClassA1 = someClassA1;
    }
}
Dagger will take care of passing the instance of SomeClassA1 as an argument to the provideSomeClassB1() method on ModuleB whenever the Component/Subcomponent declaring ModuleB is initialized. We need to instruct Dagger how to fulfill this dependency. This can be done either with a Component dependency or a Subcomponent.
Component dependency
Note the following points in the Component dependency example below:
ComponentB has to define the dependency via the dependencies attribute on the @Component annotation.
ComponentA doesn't need to declare ModuleB. This keeps the two components independent.
public class ComponentDependency {
    @Component(modules = ModuleA.class)
    public interface ComponentA {
        SomeClassA1 someClassA1();
    }

    @Component(modules = ModuleB.class, dependencies = ComponentA.class)
    public interface ComponentB {
        SomeClassB1 someClassB1();
    }

    public static void main(String[] args) {
        ModuleA moduleA = new ModuleA();
        ComponentA componentA = DaggerComponentDependency_ComponentA.builder()
                .moduleA(moduleA)
                .build();

        ModuleB moduleB = new ModuleB();
        ComponentB componentB = DaggerComponentDependency_ComponentB.builder()
                .moduleB(moduleB)
                .componentA(componentA)
                .build();
    }
}
SubComponent
Note the following points in the SubComponent example:
As ComponentB does not declare ModuleA, it cannot live independently. It becomes dependent on the component that provides ModuleA; hence it has a @Subcomponent annotation.
ComponentA declares the subcomponent via the factory method componentB(ModuleB). This makes the two components coupled; in fact, ComponentB can only be initialized via ComponentA.
public class SubComponent {
    @Component(modules = ModuleA.class)
    public interface ComponentA {
        ComponentB componentB(ModuleB moduleB);
    }

    @Subcomponent(modules = ModuleB.class)
    public interface ComponentB {
        SomeClassB1 someClassB1();
    }

    public static void main(String[] args) {
        ModuleA moduleA = new ModuleA();
        ComponentA componentA = DaggerSubComponent_ComponentA.builder()
                .moduleA(moduleA)
                .build();

        ModuleB moduleB = new ModuleB();
        ComponentB componentB = componentA.componentB(moduleB);
    }
}
According to the documentation:
Component dependency gives you access only to the bindings exposed as provision methods through component dependencies, i.e. you only have access to types explicitly declared in the parent Component.
SubComponent gives you access to the entire binding graph of its parent when it is declared, i.e. you have access to all objects declared in its parent's Modules.
Let's say you have an ApplicationComponent containing all Android-related stuff (LocationService, Resources, SharedPreferences, etc.). You also want a DataComponent where you manage things for persistence, along with a WebService to deal with APIs. The only thing you lack in DataComponent is the application Context, which resides in ApplicationComponent. The simplest way to get a Context into DataComponent would be a dependency on ApplicationComponent. You need to be sure you have the Context explicitly declared in ApplicationComponent, because you only have access to declared stuff. In this case, there is no manual work, meaning you don't need to specify subcomponents in the parent component or explicitly add your subcomponent to a parent module like this:
MySubcomponent mySubcomponent = myComponent.plus(new ChildGraphModule("child!")); // No need!
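Sketched in code (AndroidModule, DataModule and WebService are illustrative names matching the scenario above, not Dagger APIs), the dependency route requires the explicit provision method:

import android.content.Context;
import dagger.Component;

@Component(modules = AndroidModule.class)
interface ApplicationComponent {
    Context context(); // must be exposed, or dependent components cannot use it
}

@Component(dependencies = ApplicationComponent.class, modules = DataModule.class)
interface DataComponent {
    WebService webService();
}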
Now consider the case where you want to inject WebService from DataComponent and LocationService from ApplicationComponent into your Fragment, which you bind using the subcomponent "plus" feature above. The cool thing here is that the component you're binding to (ApplicationComponent) does not need to expose WebService or LocationService, because you have access to the entire graph right away.
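As a sketch under the same assumed names, nothing needs to be exposed explicitly on the subcomponent route:

import dagger.Component;
import dagger.Subcomponent;

@Component(modules = AndroidModule.class)
interface ApplicationComponent {
    DataComponent plus(DataModule module); // subcomponent factory method
}

@Subcomponent(modules = DataModule.class)
interface DataComponent {
    // MyFragment can receive both WebService (from DataModule) and
    // LocationService (from AndroidModule), although neither is exposed.
    void inject(MyFragment fragment);
}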
Here is a code example for a better understanding of Component and SubComponent:
Component:
AppComponent contains two declarations.
AppComponent is initialized in the App class.
HomeActivityComponent depends on AppComponent.
In HomeActivity, when initializing DaggerHomeActivityComponent, I pass the AppComponent object into the builder (composition).
SubComponent:
AppComponent contains the SubComponent (or SubComponents).
AppComponent is initialized in the App class.
A SubComponent doesn't know about its parent component; it only provides its own dependencies by including its Module(s).
In HomeActivity, I inject using the SubComponent, which I obtain through its parent component.
One other thing that I didn't quite realize until now is that:
A @Subcomponent instance has exactly one parent component (although different components can instantiate that same @Subcomponent and be that instance's parent)
A @Component may have zero, one, or many "parent" components declared through component dependencies
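To illustrate the last point, a component may declare several component dependencies at once, while a subcomponent always hangs off exactly one parent. A sketch reusing ComponentA and ComponentB from the component-dependency example above (ModuleC and SomeClassC1 are hypothetical):

import dagger.Component;

// One component depending on two independent components at once.
@Component(modules = ModuleC.class,
           dependencies = {ComponentA.class, ComponentB.class})
interface ComponentC {
    SomeClassC1 someClassC1();
}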