I'm building an AWS lambda function with their POJO handler, but abstracting over the RequestHandler interface results in an erased type. When this happens AWS can't cast to the input type of my lambda function:
java.util.LinkedHashMap cannot be cast to com.amazonaws.services.lambda.runtime.events.SNSEvent: java.lang.ClassCastException
java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to com.amazonaws.services.lambda.runtime.events.SNSEvent
The following code works when uploaded to AWS:
import com.amazonaws.services.lambda.runtime._
import com.amazonaws.services.lambda.runtime.events.SNSEvent
// Only working version
class PojoTest1 extends Handler1[SNSEvent] {
  override def handleRequest(input: SNSEvent, context: Context): Unit =
    println(s"message: ${input.getRecords.get(0).getSNS.getMessage}")
}

trait Handler1[Event] extends RequestHandler[Event, Unit] {
  override def handleRequest(input: Event, context: Context): Unit
}
Now because I'm using Scala I'm abstracting away the Java RequestHandler with generic traits. The following is a minified example of what doesn't work:
// Doesn't work
class PojoTest2 extends Handler2[SNSEvent] {
  override def act(input: SNSEvent): Unit =
    println(s"message: ${input.getRecords.get(0).getSNS.getMessage}")
}

trait Handler2[Event] extends RequestHandler[Event, Unit] {
  def act(input: Event): Unit
  override def handleRequest(input: Event, context: Context): Unit = act(input)
}
When I run javap PojoTest1.class this is the method that makes everything work:
public void handleRequest(com.amazonaws.services.lambda.runtime.events.SNSEvent, com.amazonaws.services.lambda.runtime.Context);
When I run javap PojoTest2.class, you can see from this signature that the type of SNSEvent has been erased to Object:
public void handleRequest(java.lang.Object, com.amazonaws.services.lambda.runtime.Context);
This looks exactly like the issue described in SI-8905. Unfortunately the posted workaround doesn't seem to work either:
// Doesn't work
abstract class Handler3[T] extends Handler2[T]

class PojoTest3 extends Handler3[SNSEvent] {
  override def act(input: SNSEvent): Unit =
    println(s"message: ${input.getRecords.get(0).getSNS.getMessage}")
}
Even extending an abstract class directly doesn't yield better results:
// Doesn't work
class PojoTest4 extends Handler4[SNSEvent] {
  override def act(input: SNSEvent): Unit =
    println(s"message: ${input.getRecords.get(0).getSNS.getMessage}")
}

abstract class Handler4[Event] extends RequestHandler[Event, Unit] {
  def act(input: Event): Unit
  override def handleRequest(input: Event, context: Context): Unit = act(input)
}
When I use javap on any of the classes that don't work I still get the same method signature with the erased type.
I'm using Scala 2.12.7, sbt 1.1.2, and sbt-assembly 0.14.8.
I'm looking for any sort of workaround to deal with this.
Note: I don't work for Amazon or Sun/Oracle, so parts of this answer are speculation.
I think there is a fundamental conflict between JVM type erasure, how AWS tries to work around it, and what you are trying to do. I also don't think that the bug you referenced is relevant. I think the behavior is the same for Java.
AFAIU, from the AWS point of view the problem looks like this: there is a stream of events of different types and a bunch of handlers, and you need to decide which events a given handler can handle. The obvious solution is to look at the signature of the handleRequest method and use the argument's type. Unfortunately the JVM type system doesn't really support generics, so you have to look for the most specific method (see further) and assume that that method is the real deal.
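To illustrate the idea (this is my speculation of the mechanism, not AWS's actual code, and the helper name is made up), such a lookup can lean on the fact that the compiler marks generated bridge methods, and reflection exposes that flag:

import java.lang.reflect.Method

// Sketch: among the handleRequest overloads of a handler class, prefer
// the one that is not a compiler-generated bridge; its parameter type
// is the real bound.
def mostSpecificHandleRequest(cls: Class[_]): Option[Method] =
  cls.getMethods.find(m => m.getName == "handleRequest" && !m.isBridge)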
Now assume you develop a compiler that targets the JVM (Scala or Java; further examples will be in Java to show that this is not a Scala-specific issue). Since the JVM doesn't support generics, you have to erase your types, and you want to erase them to the narrowest type that covers all possible arguments so you are still type-safe at the JVM level.
For the RequestHandler.handleRequest
public O handleRequest(I input, Context context);
the only valid type erasure is
public Object handleRequest(Object input, Context context);
because I and O are unbound.
Now assume you do
public class PojoTest1 implements RequestHandler<SNSEvent, Void> {
    @Override
    public Void handleRequest(SNSEvent input, Context context) {
        // whatever
        return null;
    }
}
At this point you say that you have a handleRequest method with this non-generic signature, and the compiler has to respect it. But at the same time it has to respect your implements RequestHandler as well. So what the compiler has to do is add a "bridge method", i.e. produce code logically equivalent to
public class PojoTest1 implements RequestHandler {
    // bridge method
    @Override
    public Object handleRequest(Object input, Context context) {
        // call the real method, casting the argument
        return handleRequest((SNSEvent)input, context);
    }

    // your original method
    public Void handleRequest(SNSEvent input, Context context) {
        // whatever
        return null;
    }
}
Note how your handleRequest is not really an override of RequestHandler.handleRequest. The fact that you also have Handler1 doesn't change anything. What is really important is that you have an override in your non-generic class, so the compiler has to generate a non-generic method (i.e. a method with non-erased types) in your final class. Now you have two methods, and AWS can understand that the one that takes SNSEvent is the most specific one, so it represents your real bound.
Now assume you add your generic intermediate class Handler2:
public abstract class Handler2<E> implements RequestHandler<E, Void> {
    protected abstract void act(E input);

    @Override
    public Void handleRequest(E input, Context context) {
        act(input);
        return null;
    }
}
At this point the return type is fixed but the argument is still an unbound generic, so the compiler has to produce something like this:
public abstract class Handler2 implements RequestHandler {
    protected abstract void act(Object input);

    // bridge method
    @Override
    public Object handleRequest(Object input, Context context) {
        // In Java or Scala you can't distinguish between methods based
        // only on return type, but the JVM can easily do it. This is a
        // call of the other ("your") handleRequest method.
        return handleRequest(input, context);
    }

    public Void handleRequest(Object input, Context context) {
        act(input);
        return null;
    }
}
So now when we come to
public class PojoTest2 extends Handler2<SNSEvent> {
    @Override
    protected void act(SNSEvent input) {
        // whatever
    }
}
you have overridden act but not handleRequest. Thus the compiler doesn't have to generate a specific handleRequest method and it doesn't. It only generates a specific act. So the generated code looks like this:
public class PojoTest2 extends Handler2 {
    // bridge method
    @Override
    protected void act(Object input) {
        act((SNSEvent)input); // call the "real" method
    }

    protected void act(SNSEvent input) {
        // whatever
    }
}
Or if you flatten the tree and show all (relevant) methods in PojoTest2, it looks like this:
public class PojoTest2 extends Handler2 {
    // bridge method
    @Override
    public Object handleRequest(Object input, Context context) {
        // In Java or Scala you can't distinguish between methods based
        // only on return type, but the JVM can easily do it. This is a
        // call of the other ("your") handleRequest method.
        return handleRequest(input, context);
    }

    public Void handleRequest(Object input, Context context) {
        act(input);
        return null;
    }

    // bridge method
    @Override
    protected void act(Object input) {
        act((SNSEvent)input); // call the "real" method
    }

    protected void act(SNSEvent input) {
        // whatever
    }
}
Both of the handleRequest methods accept just Object as a parameter and this is what AWS has to assume. Since you didn't override the handleRequest method in PojoTest2 (and not having to do so is the whole point of your inheritance hierarchy), the compiler didn't produce a more specific method for it.
Unfortunately I don't see any good workaround for this problem. If you want AWS to recognize the bound of the I generic parameter, you have to override handleRequest at the place in the hierarchy where this bound becomes really known.
You may try to do something like this:
// Your _non-generic_ subclass has to have the following implementation of handleRequest:
// def handleRequest(input: EventType, context: Context): Unit = handleRequestImpl(input, context)
trait UnitHandler[Event] extends RequestHandler[Event, Unit] {
  def act(input: Event): Unit

  protected def handleRequestImpl(input: Event, context: Context): Unit = act(input)
}
The benefit of this approach is that you can still put some additional wrapping logic (such as logging) into your handleRequestImpl. But still this will work only by convention. I see no way to force developers to use this code in the correct way.
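For example, a subclass following that convention could look like this (PojoTest5 is a hypothetical name; the second method is the boilerplate the convention demands):

class PojoTest5 extends UnitHandler[SNSEvent] {
  override def act(input: SNSEvent): Unit =
    println(s"message: ${input.getRecords.get(0).getSNS.getMessage}")

  // The non-generic override that gives AWS a specific signature:
  override def handleRequest(input: SNSEvent, context: Context): Unit =
    handleRequestImpl(input, context)
}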
If the whole point of your Handler2 is just to bind the output type O to Unit without adding any wrapping logic, you can do this without renaming the method to act:
trait UnitHandler[Event] extends RequestHandler[Event, Unit] {
  override def handleRequest(input: Event, context: Context): Unit
}
This way your subclasses will still have to implement handleRequest with a specific type bound to Event, and the compiler will have to produce specific methods there, so the issue will not happen.
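A subclass of this second UnitHandler then looks just like PojoTest1 from the question (sketch, with a hypothetical name):

class PojoTest6 extends UnitHandler[SNSEvent] {
  override def handleRequest(input: SNSEvent, context: Context): Unit =
    println(s"message: ${input.getRecords.get(0).getSNS.getMessage}")
}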
As @SergGr said, there are no real generics in the JVM. All types are replaced with their bounds or Object.
This answer has a different take on how to create custom abstract handlers, one that doesn't involve the AWS RequestHandler interface at all.
The way I have solved this is by using context bounds and ClassTag like this:
import java.io.{InputStream, OutputStream}
import scala.io.Source
import scala.reflect.ClassTag
import com.amazonaws.services.lambda.runtime.Context

abstract class LambdaHandler[TEvent: ClassTag, TResponse <: Any] {
  // Implemented by each concrete function with its specific event type
  def handleRequest(event: TEvent, context: Context): TResponse

  def lambdaHandler(inputStream: InputStream, outputStream: OutputStream, context: Context): Unit = {
    val json = Source.fromInputStream(inputStream).mkString
    // log.debug(json) -- logging elided; bring your own logger
    val event = decodeEvent(json)
    val response = handleRequest(event, context)
    // do things with the response ...
    outputStream.close()
  }

  def decodeEvent(json: String): TEvent = jsonDecode[TEvent](json)
}
where jsonDecode is a function that turns the String event into the expected TEvent. The original uses json4s, but you can use any de/serialization method you want; since json4s's extract wants a Manifest rather than a ClassTag, the sketch below uses Jackson's Scala module so it composes with the ClassTag bound above:
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import scala.reflect.ClassTag

def jsonDecode[TEvent: ClassTag](json: String): TEvent = {
  val mapper = new ObjectMapper().registerModule(DefaultScalaModule)
  val clazz = implicitly[ClassTag[TEvent]].runtimeClass.asInstanceOf[Class[TEvent]]
  mapper.readValue(json, clazz)
}
In the end, you will be able to write functions like this
// AwsProxyRequest and AwsProxyResponse are classes from the com.amazonaws.serverless aws-serverless-java-container-core package
class Function extends LambdaHandler[AwsProxyRequest, AwsProxyResponse] {
  def handleRequest(request: AwsProxyRequest, context: Context): AwsProxyResponse = {
    // handle the request and return an AwsProxyResponse
  }
}
Or custom SNS handlers where TEvent is the custom type of the SNS message:
// SNSEvent is a class from the com.amazonaws aws-lambda-java-events package
abstract class SnsHandler[TEvent: ClassTag] extends LambdaHandler[TEvent, Unit] {
  override def decodeEvent(json: String): TEvent = {
    val event: SNSEvent = jsonDecode[SNSEvent](json)
    val message: String = event.getRecords.get(0).getSNS.getMessage
    jsonDecode[TEvent](message)
  }
}
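For instance, a concrete handler for SNS messages whose body is the JSON of a custom type could look like this (OrderEvent and OrderHandler are made up for illustration, and the imports from the LambdaHandler sketch above are assumed):

case class OrderEvent(orderId: String, amount: BigDecimal)

class OrderHandler extends SnsHandler[OrderEvent] {
  def handleRequest(event: OrderEvent, context: Context): Unit =
    println(s"received order ${event.orderId} for ${event.amount}")
}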
If you use this method straight out of the box, you will quickly realize that there are a large number of edge cases in deserializing the JSON payloads, because there are inconsistencies in the types that you get from AWS events. Therefore, you will have to fine-tune the jsonDecode method to suit your needs.
Alternatively, use an existing library that takes care of these steps for you. There is one library that I know of for Scala (but have not used) called aws-lambda-scala, or you can take a look at the full implementation of my LambdaHandler on GitHub.
You can solve this in a way that doesn't require every subclass to call a base-class method. Basically, the base class needs to be told what the concrete type of the request is at runtime before it can convert the incoming Map into an object of that type. So create an abstract method that gives it that information:
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.fasterxml.jackson.databind.ObjectMapper;

public abstract class BaseHandler<RequestType, ResponseType> implements RequestHandler<Map<String, Object>, ResponseType> {
    @Override
    public ResponseType handleRequest(Map<String, Object> request, Context context) {
        // Convert the Map to a request object for the subclasses
        final ObjectMapper mapper = new ObjectMapper();
        final RequestType requestPojo = mapper.convertValue(request, this.getRequestType());
        return this.handle(requestPojo, context);
    }

    protected abstract ResponseType handle(RequestType request, Context context);

    protected abstract Class<RequestType> getRequestType();
}
Then each subclass just needs to implement the abstract methods. It isn't exactly ideal, as getRequestType is a strange method to have to have, but I think it is a little cleaner than the other answers here.
Example subclass:
public class SubHandler extends BaseHandler<SNSEvent, Void> {
    @Override
    protected Void handle(SNSEvent input, Context context) {
        // Handler implementation goes here
        return null;
    }

    @Override
    protected Class<SNSEvent> getRequestType() {
        // strange method needed, but fairly obvious implementation since the generics require the typing to be correct.
        return SNSEvent.class;
    }
}
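For completeness, here is a sketch of the same idea in Scala (assuming jackson-databind on the classpath; the names mirror the Java version above):

import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}
import com.fasterxml.jackson.databind.ObjectMapper

abstract class BaseHandler[Req, Res] extends RequestHandler[java.util.Map[String, AnyRef], Res] {
  private val mapper = new ObjectMapper()

  override def handleRequest(request: java.util.Map[String, AnyRef], context: Context): Res =
    handle(mapper.convertValue(request, requestType), context)

  protected def handle(request: Req, context: Context): Res
  protected def requestType: Class[Req]
}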
Related:
Converting a Map to Object Java
Converting a Map to Object Scala
I want to do something like this:
interface Serializable<FromType, ToType> {
    fun serialize(): ToType

    companion object {
        abstract fun deserialize(serialized: ToType): FromType
    }
}
or even this would work for me:
interface Serializable<ToType> {
    fun serialize(): ToType

    constructor(serialized: ToType)
}
but neither compiles. Is there a syntax for this, or will I be forced to make this an interface for a factory? Or is there another answer? 😮 That'd be neat!
Basically, nothing in a companion object can be abstract or open (and thus be overridden), and there's no way to require the implementations' companion objects to have a method or to define/require a constructor in an interface.
A possible solution for you is to separate these two functions into two interfaces:
interface Serializable<ToType> {
    fun serialize(): ToType
}

interface Deserializer<FromType, ToType> {
    fun deserialize(serialized: ToType): FromType
}
This way, you will be able to implement the first interface in a class and make its companion object implement the other one:
class C : Serializable<String> {
    override fun serialize(): String = "..."

    companion object : Deserializer<C, String> {
        override fun deserialize(serialized: String): C = C()
    }
}
Also, there's a severe limitation that only a single generic specialization of a type can be used as a supertype, so this model of serializing through the interface implementation may turn out not scalable enough, not allowing multiple implementations with different ToTypes.
For future uses, it's also possible to give the child class to a function as a receiver parameter:
val encodableClass = EncodableClass("Some Value")

// The encode function is accessed like a member function on an instance
val stringRepresentation = encodableClass.encode()

// The decode function is accessed statically
val decodedClass = EncodableClass.decode(stringRepresentation)

interface Encodable<T> {
    fun T.encode(): String
    fun decode(stringRepresentation: String): T
}
class EncodableClass(private val someValue: String) {
    // This is the remaining awkwardness:
    // you have to give the containing class as a type parameter
    // to its own companion object
    companion object : Encodable<EncodableClass> {
        override fun EncodableClass.encode(): String {
            // You can access (private) fields here
            return "This is a string representation of the class with value: $someValue"
        }

        override fun decode(stringRepresentation: String): EncodableClass {
            return EncodableClass(stringRepresentation)
        }
    }
}

// You also have to import the encode function separately:
// import codingProtocol.EncodableClass.Companion.encode
This is the more optimal use case for me. Instead of one function in the instanced object and the other in the companion object like your example, we move both functions to the companion object and extend the instance.
How to call such a Scala function?
def f(v: Void): Unit = {println(1)}
I haven't found a value of type Void in Scala yet.
I believe using Void/null in Java is similar to using Unit/() in Scala. Consider this:
abstract class Fun<A> {
    abstract public A apply();
}

class IntFun extends Fun<Integer> {
    public Integer apply() { return 0; }
}

public static <A> A m(Fun<A> x) { return x.apply(); }
Now that we have defined the generic method m, we also want to use it for classes where apply is only useful for its side effects (i.e. we need to return something that clearly indicates it's useless). void doesn't work, as it breaks the Fun<A> contract. We need a class with only one value that means "drop the return value", and that's Void with its single value null:
class VoidFun extends Fun<Void> {
    public Void apply() { /* side effects here */ return null; }
}
So now we can use m with VoidFun.
In Scala the usage of null is discouraged and Unit is used instead (it has only one value, ()), so I believe the method you mentioned was intended to be called from Java. To be compatible with Java, Scala has null, which is the only instance of the class Null. Null, in turn, is a subtype of every reference type, so you can assign null to any reference-typed variable. So the pattern Void/null works in Scala too.
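Concretely, since null is the only value you can pass for a Void parameter, the function from the question is called like this:

def f(v: Void): Unit = { println(1) }

f(null) // prints 1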
Void, or more specifically, java.lang.Void, has the following in the documentation:
The Void class is an uninstantiable placeholder class to hold a reference to the Class object representing the Java keyword void.
In Scala, there's no keyword void, so the Void type is essentially useless in Scala. The closest thing is either a function with no parameters, i.e. def f: Unit = {println(1)} which you can call using f or f(), or the Unit type for functions that don't return anything, as in your example.
Could someone explain why Scala allows a public variable to satisfy the implementation of an abstract, protected-declared item? My first assumption was that the compiler would complain, but I created a small test to see if this works, and to my surprise it does. Is there an advantage to this? (Perhaps this is normal in OOP?) Any methods to avoid the accidental pitfall?
object NameConflict extends App {
  abstract class A {
    protected[this] var name: String
    def speak = println(name)
  }

  class B(var name: String) extends A { // notice we've declared a public var
  }

  val t = new B("Tim")
  t.speak
  println(t.name) // name is exposed now?
}
It's normal, and it works the same way in Java. Sometimes it's desirable to increase the visibility of a member.
You can't do it the other way around and turn down visibility in a subclass, because the member can by definition be accessed through the supertype.
If invoking a method has terrible consequences, keep the method private and use a template method that can be overridden; the default implementation would invoke the dangerous method.
abstract class A {
  private[this] def dangerous = ???

  final protected def process: Int = {
    dangerous
    template
  }

  protected def template: Int = ???
}

class B extends A {
  override def template = 5
}
If a function accepts a structural type, it can be defined as:
def doTheThings(duck: { def walk; def quack }) { duck.quack }
or
type DuckType = { def walk; def quack }
def doTheThings(duck: DuckType) { duck.quack }
Then, you can use that function in following way:
class Dog {
  def walk { println("Dog walk") }
  def quack { println("Dog quacks") }
}

def main(args: Array[String]) {
  doTheThings(new Dog);
}
If you decompile (to Java) the classes generated by scalac for my example, you can see that the argument of doTheThings is of type Object and that the implementation uses reflection to call methods on the argument (i.e. duck.quack).
My question is: why reflection? Isn't it possible to just use an anonymous class and invokevirtual instead of reflection?
Here is a way to translate (implement) the structural-type calls for my example (Java syntax, but the point is the bytecode):
class DuckyDogTest {
    interface DuckType {
        void walk();
        void quack();
    }

    static void doTheThing(DuckType d) {
        d.quack();
    }

    static class Dog {
        public void walk() { System.out.println("Dog walk"); }
        public void quack() { System.out.println("Dog quack"); }
    }

    public static void main(String[] args) {
        final Dog d = new Dog();
        doTheThing(new DuckType() {
            public final void walk() { d.walk(); }
            public final void quack() { d.quack(); }
        });
    }
}
Consider a simple proposition:
type T = { def quack(): Unit; def walk(): Unit }

def f(a: T, b: T) =
  if (a eq b) println("They are the same duck!")
  else println("Different ducks")

f(x, x) // x is a duck
It would print Different ducks under your proposal. You could further refine it, but you just cannot keep referential equality intact using a proxy.
A possible solution would be to use the type class pattern, but that would require passing another parameter (even if implicit). Still, it's faster. But that's mostly because of the lameness of Java's reflection speed. Hopefully, method handles will get around the speed problem. Unfortunately, Scala is not scheduled to give up on Java 5, 6 and 7 (which do not have method handles) for some time...
In addition to your proxy object implementing methods on the structural type, it would also need to have appropriate pass-through implementations of all of the methods on Any (equals, hashCode, toString, isInstanceOf, asInstanceOf) and AnyRef (getClass, wait, notify, notifyAll, and synchronized). While some of these would be straightforward, some would be almost impossible to get right. In particular, all of the methods listed are "final" on AnyRef (for Java compatibility and security) and so couldn't be properly implemented by your proxy object.
Assume I want to offer a method foo on an existing type A outside of my control. As far as I know, the canonical way to do this in Scala is to implement an implicit conversion from A to some type that implements foo. Now I basically see two options.
Define a separate, maybe even hidden class for the purpose:
protected class Fooable(a: A) {
  def foo(...) = { ... }
}

implicit def a2fooable(a: A) = new Fooable(a)
Define an anonymous class inline:
implicit def a2fooable(a: A) = new { def foo(...) = { ... } }
Variant 2) is certainly less boilerplate, especially when lots of type parameters happen. On the other hand, I think it should create more overhead since (conceptually) one class per conversion is created, as opposed to one class globally in 1).
Is there a general guideline? Is there no difference, because compiler/VM get rid of the overhead of 2)?
Using a separate class is better for performance, as the alternative uses reflection.
Consider that
new { def foo(...) = { ... } }
is really
new AnyRef { def foo(...) = { ... } }
Now, AnyRef doesn't have a method foo. In Scala, this type is actually AnyRef { def foo(...): ... }, which, if you remove AnyRef, you should recognize as a structural type.
At compile time, this type can be passed back and forth, and everywhere it will be known that the method foo is callable. However, there's no structural type in the JVM, and adding an interface would require a proxy object, which would cause some problems such as breaking referential equality (i.e., an object would not be equal to a structural-type version of itself).
The way found around that was to use cached reflection calls for structural types.
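Roughly (this is a simplified sketch, not the exact code scalac emits, which also has to cope with changing receiver classes), the cached-reflection idea looks like this:

import java.lang.reflect.Method

// Cache the looked-up Method per receiver class, so repeated calls on
// the same class skip the expensive getMethod lookup and only pay for
// Method.invoke.
object FooCallSite {
  @volatile private var cache: (Class[_], Method) = null

  def fooMethod(receiver: AnyRef): Method = {
    val c = receiver.getClass
    val hit = cache
    if (hit != null && (hit._1 eq c)) hit._2
    else {
      val m = c.getMethod("foo") // throws if the receiver has no foo()
      cache = (c, m)
      m
    }
  }
}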
So, if you want to use the Pimp My Library pattern for any performance-sensitive application, declare a class.
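For what it's worth, since Scala 2.10 the idiomatic way to declare such a class is an implicit value class, which avoids the reflective call and usually the wrapper allocation as well (Enrichments and foo are illustrative names):

object Enrichments {
  implicit class RichString(val s: String) extends AnyVal {
    def foo: String = s + " foo"
  }
}

import Enrichments._
// "bar".foo now compiles to a static extension-method call, no reflection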
I believe 1 and 2 get compiled to the same bytecode (except for the class name that gets generated in case 2).
If Fooable exists only so that you can implicitly convert A to Fooable (and you're never going to directly create and use a Fooable), then I would go with option 2.
However, if you control A (meaning A is not a Java library class that you can't subclass), I would consider using a trait instead of implicit conversions to add behaviour to A.
UPDATE:
I have to reconsider my answer. I would use variant 1 of your code, because variant 2 turns out to be using reflection (Scala 2.8.1 on Linux).
I compiled these two versions of the same code, decompiled them to Java with jd-gui, and here are the results:
source code with named class
class NamedClass { def Foo: String = "foo" }

object test {
  implicit def StrToFooable(a: String) = new NamedClass
  def main(args: Array[String]) { println("bar".Foo) }
}
source code with anonymous class
object test {
  implicit def StrToFooable(a: String) = new { def Foo: String = "foo" }
  def main(args: Array[String]) { println("bar".Foo) }
}
compiled and decompiled to Java with jd-gui. The "named" version generates a NamedClass.class that gets decompiled to this Java:
public class NamedClass
  implements ScalaObject
{
  public String Foo()
  {
    return "foo";
  }
}
the anonymous version generates a test$$anon$1 class that gets decompiled to the following Java:
public final class test$$anon$1
{
  public String Foo()
  {
    return "foo";
  }
}
so almost identical, except for the anonymous class being final (they apparently want to make extra sure you won't go out of your way to try to subclass an anonymous class...).
However, at the call site I get this Java for the "named" version:
public void main(String[] args)
{
  Predef..MODULE$.println(StrToFooable("bar").Foo());
}
and this for the anonymous one:
public void main(String[] args) {
    Object qual1 = StrToFooable("bar"); Object exceptionResult1 = null;
    try {
        exceptionResult1 = reflMethod$Method1(qual1.getClass()).invoke(qual1, new Object[0]);
        Predef..MODULE$.println((String)exceptionResult1);
        return;
    } catch (InvocationTargetException localInvocationTargetException) {
        throw localInvocationTargetException.getCause();
    }
}
I googled a little and found that others have reported the same thing, but I haven't found any more insight as to why this is the case.