How to define a recursive Iterable type?

I am trying to define a recursive Iterable type Iterable_of_ToIs:
# from __future__ import annotations
from typing import overload, Optional, Iterable, Union

class ToI:
    """
    ToI = Token or Iterable of ToIs
    """
    Iterable_of_ToIs = Iterable[Union[ToI, 'Iterable_of_ToIs']]

    @overload
    def __init__(self, *, token: str) -> None:
        ...
    @overload
    def __init__(self, *, iterable: Iterable_of_ToIs) -> None:
        ...
    # actual implementation
    def __init__(
        self,
        token: Optional[str] = None,
        iterable: Optional[Iterable_of_ToIs] = None
    ) -> None:
        self.token: Optional[str] = token
        self.iterable: Optional[Iterable_of_ToIs] = iterable
But mypy complains
with error: Name 'Iterable_of_ToIs' is not defined, or,
if I move the definition of Iterable_of_ToIs out of the class scope,
with error: Cannot resolve name "Iterable_of_ToIs" (possible cyclic definition).
What am I doing wrong?

Apparently, I am not doing that much wrong.
Regarding error: Name 'Iterable_of_ToIs' is not defined:
in the Motivation section of PEP 613 (status Accepted) I found:
Type aliases are declared as top level variable assignments.
Yikes! Couldn't find this rule for Python 3.9 or prior in any other doc, especially not in PEP 484.
But anyway... it explains the error. So, ok, let's move the line to the top level.
Regarding error: Cannot resolve name "Iterable_of_ToIs" (possible cyclic definition):
this seems not to be supported yet: https://github.com/python/mypy/issues/731


Implicit object works inline but not when it is imported

I am using avro4s to help with avro serialization and deserialization.
I have a case class that includes Timestamps and need those Timestamps to be converted to nicely formatted strings before I publish the records to Kafka; the default encoder is converting my Timestamps to Longs. I read that I needed to write a decoder and encoder (from the avro4s readme).
Here is my case class:
case class MembershipRecordEvent(id: String,
                                 userHandle: String,
                                 planId: String,
                                 teamId: Option[String] = None,
                                 note: Option[String] = None,
                                 startDate: Timestamp,
                                 endDate: Option[Timestamp] = None,
                                 eventName: Option[String] = None,
                                 eventDate: Timestamp)
I have written the following encoder:
Test.scala
def test() = {
  implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
    override def encode(t: MembershipRecordEvent, schema: Schema) = {
      val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
      val record = new GenericData.Record(schema)
      record.put("id", t.id)
      record.put("userHandle", t.userHandle)
      record.put("teamId", t.teamId.orNull)
      record.put("note", t.note.orNull)
      record.put("startDate", dateFormat.format(t.startDate))
      record.put("endDate", if (t.endDate.isDefined) dateFormat.format(t.endDate.get) else null)
      record.put("eventName", t.eventName.orNull)
      record.put("eventDate", dateFormat.format(t.eventDate))
      record
    }
  }
  val recordInAvro2 = Encoder[MembershipRecordEvent].encode(testRecord, AvroSchema[MembershipRecordEvent]).asInstanceOf[GenericRecord]
  println(recordInAvro2)
}
If I declare my implicit object inline, like I did above, it creates the GenericRecord that I am looking for just fine. I tried to move the implicit object out to its own file, wrapped in an object, and I import Implicits._ to use my custom encoder.
Implicits.scala
object Implicits {
  implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
    override def encode(t: MembershipRecordEvent, schema: Schema) = {
      val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
      val record = new GenericData.Record(schema)
      record.put("id", t.id)
      record.put("userHandle", t.userHandle)
      record.put("teamId", t.teamId.orNull)
      record.put("note", t.note.orNull)
      record.put("startDate", dateFormat.format(t.startDate))
      record.put("endDate", if (t.endDate.isDefined) dateFormat.format(t.endDate.get) else null)
      record.put("eventName", t.eventName.orNull)
      record.put("eventDate", dateFormat.format(t.eventDate))
      record
    }
  }
}
Test.scala
import Implicits._
val recordInAvro2 = Encoder[MembershipRecordEvent].encode(testRecord, AvroSchema[MembershipRecordEvent]).asInstanceOf[GenericRecord]
println(recordInAvro2)
It fails to use my encoder (it doesn't hit my breakpoints). I have tried a myriad of things to see why it fails, to no avail.
How can I correctly import an implicit object?
Is there a simpler solution to encode my case class's Timestamps to Strings without writing an encoder for the entire case class?
TL;DR
As suggested in one of the comments above, you can place it in the companion object.
The longer version:
Probably you have another encoder in scope that is used instead of the encoder you defined in Implicits.
I'll quote some phrases from WHERE DOES SCALA LOOK FOR IMPLICITS?
When a value of a certain name is required, lexical scope is searched for a value with that name. Similarly, when an implicit value of a certain type is required, lexical scope is searched for a value with that type.
Any such value which can be referenced with its “simple” name, without selecting from another value using dotted syntax, is an eligible implicit value.
There may be more than one such value because they have different names.
In that case, overload resolution is used to pick one of them. The algorithm for overload resolution is the same used to choose the reference for a given name, when more than one term in scope has that name. For example, println is overloaded, and each overload takes a different parameter type. An invocation of println requires selecting the correct overloaded method.
In implicit search, overload resolution chooses a value among more than one that have the same required type. Usually this entails selecting a narrower type or a value defined in a subclass relative to other eligible values.
The rule that the value must be accessible using its simple name means that the normal rules for name binding apply.
In summary, a definition for x shadows a definition in an enclosing scope. But a binding for x can also be introduced by local imports. Imported symbols can’t override definitions of the same name in an enclosing scope. Similarly, wildcard imports can’t override an import of a specific name, and names in the current package that are visible from other source files can’t override imports or local definitions.
These are the normal rules for deciding what x means in a given context, and also determine which value x is accessible by its simple name and is eligible as an implicit.
This means that an implicit in scope can be disabled by shadowing it with a term of the same name.
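To make the shadowing rule concrete, here is a small self-contained sketch (not from the original answer; the names are illustrative):
object ShadowingDemo extends App {
  def needsEvidence(implicit n: Int): Int = n

  implicit val answer: Int = 42
  println(needsEvidence)             // 42: the implicit is eligible via its simple name

  locally {
    val answer: String = "shadowed"  // a non-implicit term with the same name
    // println(needsEvidence)
    // per the rule quoted above, the implicit Int can no longer be referenced by
    // its simple name here, so it is not eligible as an implicit in this scope
    println(answer)
  }
}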
Now I'll state the companion object logic:
Implicit syntax can avoid the import tax, which of course is a “sin tax,” by leveraging “implicit scope”, which depends on the type of the implicit instead of imports in lexical scope.
When an implicit of type T is required, implicit scope includes the companion object of T:
When an F[T] is required, implicit scope includes both the companion of F and the companion of the type argument, e.g., object C for F[C].
In addition, implicit scope includes the companions of the base classes of F and C, including package objects, such as p for p.F.
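Here is a sketch of that companion-object placement for this case (not from the original answer; it reuses the encoder body from Implicits.scala above and assumes avro4s's com.sksamuel.avro4s.Encoder plus the Avro classes already used there):
import java.text.SimpleDateFormat
import com.sksamuel.avro4s.Encoder
import org.apache.avro.Schema
import org.apache.avro.generic.GenericData

object MembershipRecordEvent {
  // Lives in the companion object of the case class, so it is in implicit scope
  // for Encoder[MembershipRecordEvent] without any import at the call site.
  implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
    override def encode(t: MembershipRecordEvent, schema: Schema) = {
      val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
      val record = new GenericData.Record(schema)
      record.put("id", t.id)
      record.put("userHandle", t.userHandle)
      // ... put the remaining fields exactly as in Implicits.scala above ...
      record.put("eventDate", dateFormat.format(t.eventDate))
      record
    }
  }
}
With that in place, Encoder[MembershipRecordEvent] in Test.scala should pick up this instance without import Implicits._.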

Decoding Case Class w/ Tagged Type

Given the following on Ammonite:
@ import $ivy.`io.circe::circe-core:0.9.0`
@ import $ivy.`io.circe::circe-generic:0.9.0`
@ import $ivy.`com.chuusai::shapeless:2.3.3`
@ import shapeless.tag
import shapeless.tag
@ trait Foo
defined trait Foo
@ import io.circe._, io.circe.generic.semiauto._
import io.circe._, io.circe.generic.semiauto._
@ import shapeless.tag.@@
import shapeless.tag.@@
@ implicit def taggedTypeDecoder[A, B](implicit ev: Decoder[A]): Decoder[A @@ B] =
    ev.map(tag[B][A](_))
defined function taggedTypeDecoder
Given a Foo:
@ case class F(x: String @@ Foo)
defined class F
I can summon a Decoder[String @@ Foo]:
@ Decoder[String @@ Foo]
res17: Decoder[String @@ Foo] = io.circe.Decoder$$anon$21@16b32e49
But not a Decoder[F]:
@ deriveDecoder[F]
cmd18.sc:1: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[ammonite.$sess.cmd16.F]
val res18 = deriveDecoder[F]
            ^
Compilation Failed
How can I get a Decoder[F]?
This is a bug in shapeless' Lazy - milessabin/shapeless#309
I have a PR that makes your example compile - milessabin/shapeless#797 (I checked with publishLocal)
Basically the problem in Lazy is that it expands type aliases too eagerly (A @@ B is a type alias for A with Tagged[B]) which in turn triggers a Scala bug - scala/bug#10506
The Scala bug doesn't have a clear solution in sight. It's another incarnation of the subtyping vs parametric polymorphism problem that complicates type inference. The gist of it is that Scala has to perform subtype checking and type inference at the same time. But when we put some type variables like A and B in a refined type like A with Tagged[B] (actually circe ends up looking for a FieldType[K, A with Tagged[B]] where FieldType is yet another type alias hiding a refined type), subtyping has to be checked for each component individually. This means that the order in which we choose to check the components determines how the type variables A and B will be constrained. In some cases they end up over- or under-constrained and cannot be inferred correctly.
Apropos, the shapeless tests show a workaround, but I don't think it applies to circe, because it's using some kind of macro rather than doing vanilla typeclass derivation.
Long story short you can:
Wait for a shapeless release (please upvote #797) and a subsequent circe release
Not use tagged types =/
Try to use a different encoding without refined or structural types - maybe alexknvl/newtypes? (I haven't tried)
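In the meantime, one workaround (not mentioned in the answer; assuming the same session, i.e. circe 0.9.x and shapeless 2.3.3) is to skip generic derivation for F and write the decoder by hand, so shapeless.Lazy is never involved:
import io.circe.Decoder
import shapeless.tag

// Hand-written rather than derived: no Lazy, no HList, no type-alias expansion.
implicit val fDecoder: Decoder[F] =
  Decoder.forProduct1("x")((x: String) => F(tag[Foo][String](x)))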

Decoding Shapeless Tagged Types

Given the following on Ammonite:
@ import $ivy.`io.circe::circe-core:0.9.0`
@ import $ivy.`io.circe::circe-generic:0.9.0`
@ import $ivy.`com.chuusai::shapeless:2.3.3`
@ import shapeless.tag
import shapeless.tag
@ trait Foo
defined trait Foo
@ import io.circe._, io.circe.generic.semiauto._
import io.circe._, io.circe.generic.semiauto._
@ import shapeless.tag.@@
import shapeless.tag.@@
Then, I attempted to define a generic tagged type decoder:
@ implicit def taggedTypeDecoder[A, B](implicit ev: Decoder[A]): Decoder[A @@ B] =
    ev.map(tag[B][A](_))
defined function taggedTypeDecoder
It works when explicitly spelling out String @@ Foo:
@ val x: String @@ Foo = tag[Foo][String]("foo")
x: String @@ Foo = "foo"
@ implicitly[Decoder[String @@ Foo]]
res10: Decoder[String @@ Foo] = io.circe.Decoder$$anon$21@2b17bb37
But, when defining a type alias:
@ type FooTypeAlias = String @@ Foo
defined type FooTypeAlias
It's not compiling:
@ implicitly[Decoder[FooTypeAlias]]
cmd12.sc:1: diverging implicit expansion for type io.circe.Decoder[ammonite.$sess.cmd11.FooTypeAlias]
starting with method decodeTraversable in object Decoder
val res12 = implicitly[Decoder[FooTypeAlias]]
            ^
Compilation Failed
Why is that? Is there a known "fix"?
Lucky you, to hit two compiler bugs in the same day. This one is scala/bug#8740. The good? news is that there is a partial fix waiting around in this comment for someone to step up and make a PR (maybe this is you). I believe it's partial because it looks like it will work for a specific tag but not for a generic one (I'm not 100% sure).
The reason why you see a diverging implicit expansion is really funny. The compiler can either expand all aliases in one step (essentially going from FooTypeAlias to String with Tagged[Foo]) or not expand anything. So when it compares String @@ Foo and A @@ B it doesn't expand, because they match as is. But when it compares FooTypeAlias and A @@ B it expands both fully, and it ends up in a situation where it has to compare refined types, one of which contains type variables (see my answer to your other related question). Here our carefully crafted abstractions break down again and the order of constraints starts to matter. You as the programmer, looking at A with Tagged[B] <:< String with Tagged[Foo], know that the best match is A =:= String and B =:= Foo. However Scala will first compare A <:< String and A <:< Tagged[Foo] and it concludes that A <:< Tagged[Foo] with String (yes, in reverse), which leaves Nothing for B. But wait, we need an implicit Decoder[A]! - which sends us in a loop. So A got over-constrained and B got under-constrained.
Edit: It seems to work if we make @@ abstract in order to prevent the compiler from dealiasing: milessabin/shapeless#807. But now it boxes and I can't make arrays work.
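A more local way to sidestep the inference problem (not from the original answer; same session assumed) should be to pin the type parameters yourself, so the compiler never has to solve for A and B against the dealiased refined type:
@ implicit val fooAliasDecoder: Decoder[FooTypeAlias] = taggedTypeDecoder[String, Foo]
@ implicitly[Decoder[FooTypeAlias]]   // should now resolve to the already-constructed fooAliasDecoder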

Scala - how to go resolve "Value is not a member of Nothing" error

This example code is based on Atmosphere classes, but if someone could give me some insights into what the error means in general, I think I can figure out any Atmosphere-specific solution...
val bc = BroadcasterFactory.getDefault().lookup(_broadcasterId)
bc.broadcast(message)
After the first line, bc should contain a handle to an object whose class definition includes the method broadcast() -- in fact, it contains several overloaded variations. However, the compiler chokes on the second line of code with the following: "value broadcast is not a member of Nothing"
Any ideas/suggestions on what would be causing this?
Thanks.
EDIT: signature for [BroadcasterFactory].lookup:
abstract Broadcaster lookup(Object id)
Note: 1) that is the signature version that I've used in the example, 2) it is the Java interface signature - whereas getDefault() hands back an instantiated object that implements that interface.
Solution: explicitly annotate the type of the value:
val bc: Broadcaster = BroadcasterFactory.getDefault().lookup(_broadcasterId)
Nothing is the name of a type. It's a subtype of all other types. You can't call methods on something typed as Nothing; you have to specify the exact type ((bc: ExactType).broadcast(message)). Nothing has no instances. A method that returns Nothing will never actually return a value; it will eventually throw an exception.
Type inference
Definition of lookup:
abstract public <T extends Broadcaster> T lookup(Object id);
In Scala this definition looks like this:
def lookup[T <: Broadcaster](id: Object): T
The type parameter is not specified in the call to lookup. In this case the compiler will infer the type parameter as the most specific type - Nothing:
scala> def test[T](i: Int): T = ???
test: [T](i: Int)T
scala> lazy val x = test(1)
x: Nothing = <lazy>
scala> lazy val x = test[String](1)
x: String = <lazy>
You could specify the type parameter like this:
val bc = BroadcasterFactory.getDefault().lookup[Broadcaster](_broadcasterId)
Draft implementation
During development, lookup can be "implemented" like this:
def lookup(...) = ???
??? returns Nothing.
You should specify either the result type of the lookup method, like this: def lookup(...): <TypeHere> = ..., or the type of bc: val bc: <TypeHere> = ....
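Putting it together, here is a minimal self-contained sketch (the names mimic the Atmosphere API but are just stand-ins) showing why Nothing is inferred and both ways to fix it:
object NothingInferenceDemo extends App {
  trait Broadcaster { def broadcast(message: String): Unit }

  // Stand-in for the Java factory: T is constrained only by its upper bound,
  // not by the arguments, so it is inferred as Nothing unless specified.
  object BroadcasterFactory {
    private val default: Broadcaster = new Broadcaster {
      def broadcast(message: String): Unit = println(s"broadcasting: $message")
    }
    def lookup[T <: Broadcaster](id: Any): T = default.asInstanceOf[T]
  }

  // val bc = BroadcasterFactory.lookup("chat")  // bc: Nothing
  // bc.broadcast("hi")                          // error: value broadcast is not a member of Nothing

  // Fix 1: supply the type parameter explicitly.
  val bc1 = BroadcasterFactory.lookup[Broadcaster]("chat")
  bc1.broadcast("hi")

  // Fix 2: annotate the expected type of the value; T is then inferred as Broadcaster.
  val bc2: Broadcaster = BroadcasterFactory.lookup("chat")
  bc2.broadcast("hi")
}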

Import specific method signature in Scala

Is there a manner to import a specific method signature?
def test() {
  lazy val log = LoggerFactory.getLogger("AndroidProxy")
  import log.{error, debug, info, trace}

  trace("This is a test")
  trace "This is also" // <- This line will not compile
}
Perhaps it's not possible, but my primary goal is to allow this without adding a new method. I've tried these to no avail:
import log.{error => error(_:String)}
import log.{error(x: String) => error(x)}
I suppose the primary challenge is that all of the methods take one argument. I can call no-argument methods without (), and I can create a chain of method calls that omits the dots, e.g. foo getX toString, but I don't know how to make an arity-1 call work that way automatically.
This is a followup to this question.
The issue with the code:
trace "This is also" // <- This line will not compile
is not that you're somehow importing too many overloaded variants of trace - it's that you can't use Scala's infix notation this way. An expression like:
e op
is interpreted as a "Postfix Operation" (see section 6.12.2 of the Scala Language Specification), equivalent to the call:
e.op
So your code would be equivalent to:
trace."This is also"
which is of course a compile error.
If you instead use an "Infix Operation" of the form e1 op e2 (section 6.12.3 of the Scala Language Specification), then there aren't any problems even if the method is overloaded:
scala> class Logger { def trace(s: String) = "1arg"; def trace(i: Int, s: String) = "2arg" }
defined class Logger
scala> val log = new Logger
log: Logger = Logger@63ecceb3
scala> log trace "This is also"
res0: String = 1arg
No, there is not a way to import a specific method signature.
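For completeness, a minimal self-contained sketch of the pattern in the question (a plain println-based logger stands in for SLF4J; the names are illustrative):
object ImportMethodDemo extends App {
  class Log {
    def trace(msg: String): Unit = println(s"TRACE $msg")
    def error(msg: String): Unit = println(s"ERROR $msg")
  }

  val log = new Log
  import log.{trace, error}   // brings the methods of this particular instance into scope

  trace("This is a test")     // fine: ordinary application syntax
  // trace "This is also"     // still does not compile: infix notation needs an explicit receiver
  log trace "This is also"    // fine: an infix operation of the form e1 op e2
}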