Typing a decorator that curries functions - annotations

I came across this interesting snippet on GitHub that curries functions and decided to try to add annotations. So far I have the following.
from typing import cast, Callable, TypeVar, Any, Union
from functools import wraps

U = TypeVar('U')

def curry(f: Callable[..., U]) -> Callable[..., U]:
    @wraps(f)
    def curry_f(*args: Any, **kwargs: Any) -> Union[Callable[..., U], U]:
        if len(args) + len(kwargs) >= f.__code__.co_argcount:
            return f(*args, **kwargs)
        # do I need another @wraps(f) here if curry_f is already @wrapped?
        def curried_f(*args2: Any, **kwargs2: Any) -> Any:
            return curry_f(*(args + args2), **{**kwargs, **kwargs2})
        return cast(U, curried_f)
    return curry_f

@curry
def foo(x: int, y: str) -> str:
    return str(x) + ' ' + y

foo(5)
foo(1, 'hello!')
foo(1)('hello!')
However, with the last example, Mypy gives the following.
curry.py:43: error: "str" not callable
I can't seem to come up with a way to mitigate this issue.

Related

Providing the equivalent of a type parameter [T] from inside a Scala 3 macro

I'm, um, a very naive Scala 3 metaprogrammer. Apologies in advance.
I'm trying to canonicalize type names. Calling _.dealias.simplified.show on a TypeRepr does the job just fine on the base type, but it doesn't touch the type parameters. So, I'd like to iterate through the type params and call my canonicalizer on them recursively. After some trial and error and reading great intros by Adam Warski and Eugene Yokota, I've managed to iterate through the type params, but I can't figure out how to make the recursive call.
object Playpen:
  import scala.quoted.*

  inline def recursiveCanonicalName[T]: String = ${Playpen.recursiveCanonicalNameImpl[T]}

  def recursiveCanonicalNameImpl[T](using q: Quotes)(using tt: Type[T]): Expr[String] =
    import quotes.reflect.*
    val repr = TypeRepr.of[T]
    repr.widenTermRefByName.dealias match
      case AppliedType(name, args) =>
        Expr(name.dealias.simplified.show + "[" + args.map(a => a.dealias.simplified.show /*(recursiveCanonicalNameImpl(q)(a.asType)*/).mkString(",") + "]")
      case _ =>
        Expr(repr.dealias.simplified.show)
The current version "works" to canonicalize one level of type params, but without recursion it can't go deeper.
# macroplay.Playpen.recursiveCanonicalName[Map[String,String]]
res1: String = "scala.collection.immutable.Map[java.lang.String,java.lang.String]"
# macroplay.Playpen.recursiveCanonicalName[Map[Seq[String],Seq[String]]]
res3: String = "scala.collection.immutable.Map[scala.collection.immutable.Seq[scala.Predef.String],scala.collection.immutable.Seq[scala.Predef.String]]"
Any help (and your patience, scalameta makes me feel dumb) is greatly appreciated!
Try pattern matching with a type quotation:
args.map(a =>
  a.asType match {
    case '[a] => recursiveCanonicalNameImpl[a].show.stripPrefix("\"").stripSuffix("\"")
  }
)
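For reference, here is a sketch (not compiler-checked) of how that match could slot into the original method, reusing the same imports and givens as in the question:

// Sketch: recursive version of recursiveCanonicalNameImpl using a type-quotation match.
def recursiveCanonicalNameImpl[T](using q: Quotes)(using tt: Type[T]): Expr[String] =
  import quotes.reflect.*
  TypeRepr.of[T].widenTermRefByName.dealias match
    case AppliedType(name, args) =>
      val argNames = args.map { arg =>
        arg.asType match {
          // binding the argument's static type lets the macro impl recurse on it
          case '[a] => recursiveCanonicalNameImpl[a].show.stripPrefix("\"").stripSuffix("\"")
        }
      }
      Expr(name.dealias.simplified.show + "[" + argNames.mkString(",") + "]")
    case _ =>
      Expr(TypeRepr.of[T].dealias.simplified.show)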
Alternatively, you can introduce a recursive helper function that takes a TypeRepr argument rather than relying on the static type T:
def recursiveCanonicalNameImpl[T](using q: Quotes)(using tt: Type[T]): Expr[String] =
  import quotes.reflect.*
  def hlp(repr: TypeRepr): Expr[String] =
    repr.widenTermRefByName.dealias match
      case AppliedType(name, args) =>
        Expr(name.dealias.simplified.show + "[" + args.map(a =>
          hlp(a).show.stripPrefix("\"").stripSuffix("\"")
        ).mkString(",") + "]")
      case _ =>
        Expr(repr.dealias.simplified.show)
  val repr = TypeRepr.of[T]
  hlp(repr)

Scala Compiler Plugin Rewrite Function Definition As A Tuple: error: not found: value scala.Tuple2

I am writing a compiler plugin to rewrite a function definition as a tuple of the function hash and the function body. So the following
def f(a: Int, b: Int) : Int = (a + b)
would turn into
val f = ("some-complex-hash", (a: Int, b: Int) => (a + b))
Let me note that this is for a research project and will be used to integrate some variant of reversible computations into a subset of the language. I am aware that, on its own, this is a bad idea and will break a lot of things.
The documentation on compiler plugin construction seems rather lacking (I did go through the official guide), so I am trying to make progress by looking at existing plugins such as kind-projector.
In order to understand how to represent this, I followed this process:
1. Reify the expression: val expr = reify {....}
2. Extract the tree: val tree = expr.tree
3. Inspect it with showRaw(tree)
I have done this for a function definition, a tuple, and a lambda, which I believe should be enough to implement this. This is what I have so far:
ValDef(Modifiers(), TermName(dd.name), TypeTree(),
  Apply(
    Select(Ident("scala.Tuple2"), TermName("apply")),
    List(
      Literal(Constant(hash)),
      Function(
        List(dd.vparamss),
        dd.rhs
      )
    )
  )
)
Before I even get to this, I am having trouble expanding to any tuple at all, i.e. rewriting any function as ("a", "b"), which expands to the following in the REPL:
Apply(Select(Ident(scala.Tuple2), TermName("apply")), List(Literal(Constant("a")), Literal(Constant("b"))))
The Problem
If I do Ident(scala.Tuple2) I get the following at compile time
overloaded method value Ident with alternatives:
[error] (sym: FunctionRewriter.this.global.Symbol)FunctionRewriter.this.global.Ident <and>
[error] (name: String)FunctionRewriter.this.global.Ident <and>
[error] FunctionRewriter.this.global.Ident.type
[error] cannot be applied to (Tuple2.type)
[error] Select(Ident(scala.Tuple2), TermName("apply")),
If I do Ident("scala.Tuple2") (notice the string), I get the following when the plug in runs (at "run time")
<test>:2: error: not found: value scala.Tuple2
[error] object Main extends App {
I would appreciate any pointers on how to rewrite the definition as a tuple.
The Full Code:
class CompilerPlugin(override val global: Global) extends Plugin {
  val name = "provenance-expander"
  val components = new FunctionRewriter(global) :: Nil
}

class FunctionRewriter(val global: Global) extends PluginComponent with TypingTransformers {
  import global._

  override val phaseName = "compiler-plugin-phase"
  override val runsAfter = List("parser")

  override def newPhase(prev: Phase) = new StdPhase(prev) {
    override def apply(unit: CompilationUnit) {
      unit.body = new MyTypingTransformer(unit).transform(unit.body)
    }
  }

  class MyTypingTransformer(unit: CompilationUnit) extends TypingTransformer(unit) {
    override def transform(tree: Tree): Tree = tree match {
      case dd: DefDef =>
        val hash: String = "do some complex recursive hashing"
        Apply(
          Select(Ident("scala.Tuple2"), TermName("apply")),
          List(Literal(Constant("a")), Literal(Constant("b")))
        )
      case _ => super.transform(tree)
    }
  }

  def newTransformer(unit: CompilationUnit) = new MyTypingTransformer(unit)
}
Thanks to @SethTisue for answering in the comments. I am writing up an answer for anybody who might face a similar issue in the future.
As Seth mentioned, using mkTuple was the right way to go. In order to use it, you need the following import
import global.gen._
In this specific case, as originally speculated in the question, the transformation breaks a lot of things. Mainly, transforming the methods injected by object and class definitions, i.e. for method dispatch or init, results in malformed structures. The workaround is to use explicit annotations. So the final DefDef case ends up looking like the following:
case dd: DefDef =>
  if (dd.mods.hasAnnotationNamed(TypeName(typeOf[policyFunction].typeSymbol.name.toString))) {
    val hash: String = md5HashString(dd.rhs.toString())
    val tup =
      atPos(tree.pos.makeTransparent)(mkTuple(List(Literal(Constant(hash)), Function(dd.vparamss(0), dd.rhs))))
    val q = ValDef(Modifiers(), dd.name, TypeTree(), tup)
    println(s"Re-written Function: $q")
    q
  } else {
    dd
  }
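For illustration, here is a hypothetical annotated definition and the rough shape of its rewrite; the policyFunction annotation name comes from the hasAnnotationNamed check above, everything else is illustrative only:

// Hypothetical marker annotation matching the check in the answer above.
class policyFunction extends scala.annotation.StaticAnnotation

object Example {
  // A definition the plugin would pick up:
  @policyFunction
  def f(a: Int, b: Int): Int = a + b

  // ...and rewrite (roughly) into a val holding the hash and a lambda:
  // val f = ("<md5 of the right-hand side>", (a: Int, b: Int) => a + b)
}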

How can I use a macro with refined return type inline?

I have a macro:
def f[T]: Any = macro fImpl[T]

def fImpl[T: context.WeakTypeTag](context: whitebox.Context): context.Tree = {
  import context.universe._
  q"{ (x: ${weakTypeOf[T]}) => x + 1 }"
}
When I use it as
f[Int](1)
I see
Error:(26, 6) Any does not take parameters
f[Int](1)
^
If I split into two statements,
val x = f[Int]
x(1)
there are no errors.
Is there a way that I can use the macro f[Int] as a function, without writing an auxiliary statement?
This is a bug, which is reported in our issue tracker at https://issues.scala-lang.org/browse/SI-7914.

Scala untyped macro in infix position

In response to this question, I've been having a go at implementing a Haskell-style 'where' expression in Scala using the macro-paradise branch. The code is available at scala-where. I can now write something like the following:
val result = where ( f1(1) * f2(2), {
  def f1(x : Int) = x + 1
  def f2(x : Int) = x + 2
})
However, what I'd really like to do is to be able to call this in infix position:
val result = ( f1(1) * f2(2)) where {
  def f1(x : Int) = x + 1
  def f2(x : Int) = x + 2
}
Normally, this sort of thing would be easy, but I can't see how to do it with the macro call. The expression (f1(1) * f2(2)) won't type before macro application, so something like building an implicit value class doesn't work. Is there a way to get this kind of syntax otherwise?
Failing this, just having two parameter lists so one could do:
val result = where (f1(1) * f2(2)) {
  def f1(x : Int) = x + 1
  def f2(x : Int) = x + 2
}
would be nice, but again this seems difficult. Can one call a macro with two parameter lists?
For the first option: I would think you could make the implicit conversion an untyped macro itself, no?
For the second option: You can call a macro with multiple parameter lists, yes. Multiple lists at the call site will translate to multiple lists at the definition site, e.g.:
def myMacro(a: _)(b: _) = macro myMacro_impl
def myMacro_impl(c: Context)(a: c.Tree)(b: c.Tree): c.Tree = { ... }
Would be called as:
myMacro(...)(...)
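For a concrete (hypothetical) illustration of the two-parameter-list shape with an ordinary typed macro, here is a minimal sketch using the Scala 2.10-era scala.reflect.macros.Context; the PairMacro names are made up:

import scala.language.experimental.macros
import scala.reflect.macros.Context

object PairMacro {
  // The two parameter lists of the macro def map onto two parameter
  // lists in the implementation, after the Context parameter.
  def pair(a: Int)(b: String): (Int, String) = macro pairImpl

  def pairImpl(c: Context)(a: c.Expr[Int])(b: c.Expr[String]): c.Expr[(Int, String)] = {
    import c.universe._
    reify((a.splice, b.splice))
  }
}

// At a use site (compiled after PairMacro):
// PairMacro.pair(1)("one")  // expands to the tuple (1, "one")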
Answer: as of 2013-03-08 it is not possible to use untyped macros in an infix position. Quoted from Eugene Burmako on the scala-user mailing list:
Currently the argument on the left has to be typechecked first before any implicit resolution kicks in. The fact that you can write "class foo(x: _)" is an oversight - the underscore syntax is supposed to be working only in untyped macros.
For reference, the closest I came to being able to do this was the following:
implicit class HasWhere(val exp : _) {
  def where(block : Unit) = macro whereInfix
}

def whereInfix(c : Context)(block : c.Expr[Unit]) = {
  import c.universe._
  val exp = Select(c.prefix.tree, TermName("exp"))
  val Expr(Block((inner, _))) = block
  val newinner = inner :+ exp
  Block(newinner : _*)
}

Threading extra state through a parser in Scala

I'll give you the tl;dr up front
I'm trying to use the state monad transformer in Scalaz 7 to thread extra state through a parser, and I'm having trouble doing anything useful without writing a lot of t m a -> t m b versions of m a -> m b methods.
An example parsing problem
Suppose I have a string containing nested parentheses with digits inside them:
val input = "((617)((0)(32)))"
I also have a stream of fresh variable names (characters, in this case):
val names = Stream('a' to 'z': _*)
I want to pull a name off the top of the stream and assign it to each parenthetical expression as I parse it, and then map that name to a string representing the contents of the parentheses, with the nested parenthetical expressions (if any) replaced by their names.
To make this more concrete, here's what I'd want the output to look like for the example input above:
val target = Map(
  'a' -> "617",
  'b' -> "0",
  'c' -> "32",
  'd' -> "bc",
  'e' -> "ad"
)
There may be either a string of digits or arbitrarily many sub-expressions at a given level, but these two kinds of content won't be mixed in a single parenthetical expression.
To keep things simple, we'll assume that the stream of names will never contain either duplicates or digits, and that it will always contain enough names for our input.
Using parser combinators with a bit of mutable state
The example above is a slightly simplified version of the parsing problem in this Stack Overflow question. I answered that question with a solution that looked roughly like this:
import scala.util.parsing.combinator._

class ParenParser(names: Iterator[Char]) extends RegexParsers {
  def paren: Parser[List[(Char, String)]] = "(" ~> contents <~ ")" ^^ {
    case (s, m) => (names.next -> s) :: m
  }

  def contents: Parser[(String, List[(Char, String)])] =
    "\\d+".r ^^ (_ -> Nil) | rep1(paren) ^^ (
      ps => ps.map(_.head._1).mkString -> ps.flatten
    )

  def parse(s: String) = parseAll(paren, s).map(_.toMap)
}
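For reference, using it on the example input from above would look something like this (a usage sketch; the expected result is the target map shown earlier):

// Usage sketch for the mutable-state parser above.
val parser = new ParenParser(('a' to 'z').iterator)
val result = parser.parse("((617)((0)(32)))")
// on success, result.get is
// Map('a' -> "617", 'b' -> "0", 'c' -> "32", 'd' -> "bc", 'e' -> "ad")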
It's not too bad, but I'd prefer to avoid the mutable state.
What I want
Haskell's Parsec library makes adding user state to a parser trivially easy:
import Control.Applicative ((*>), (<$>), (<*))
import Data.Map (fromList)
import Text.Parsec

paren = do
  (s, m) <- char '(' *> contents <* char ')'
  h : t <- getState
  putState t
  return $ (h, s) : m
  where
    contents
      =   flip (,) []
      <$> many1 digit
      <|> (\ps -> (map (fst . head) ps, concat ps))
      <$> many1 paren

main = print $
  runParser (fromList <$> paren) ['a'..'z'] "example" "((617)((0)(32)))"
This is a fairly straightforward translation of my Scala parser above, but without mutable state.
What I've tried
I'm trying to get as close to the Parsec solution as I can using Scalaz's state monad transformer, so instead of Parser[A] I'm working with StateT[Parser, Stream[Char], A].
I have a "solution" that allows me to write the following:
import scala.util.parsing.combinator._
import scalaz._, Scalaz._

object ParenParser extends ExtraStateParsers[Stream[Char]] with RegexParsers {
  protected implicit def monadInstance = parserMonad(this)

  def paren: ESP[List[(Char, String)]] =
    (lift("(") ~> contents <~ lift(")")).flatMap {
      case (s, m) => get.flatMap(
        names => put(names.tail).map(_ => (names.head -> s) :: m)
      )
    }

  def contents: ESP[(String, List[(Char, String)])] =
    lift("\\d+".r ^^ (_ -> Nil)) | rep1(paren).map(
      ps => ps.map(_.head._1).mkString -> ps.flatten
    )

  def parse(s: String, names: Stream[Char]) =
    parseAll(paren.eval(names), s).map(_.toMap)
}
This works, and it's not that much less concise than either the mutable state version or the Parsec version.
But my ExtraStateParsers is ugly as sin; I don't want to try your patience more than I already have, so I won't include it here (although here's a link, if you really want it). I've had to write new versions of every Parser and Parsers method I use above for my ExtraStateParsers and ESP types (rep1, ~>, <~, and |, in case you're counting). If I had needed to use other combinators, I'd have had to write new state transformer-level versions of them as well.
Is there a cleaner way to do this? I'd love to see an example of Scalaz 7's state monad transformer being used to thread state through a parser, but Scalaz 6 or Haskell examples would also be useful and appreciated.
Probably the most general solution would be to rewrite Scala's parser library to accommodate monadic computations while parsing (like you partly did), but that would be quite a laborious task.
I suggest a solution using ScalaZ's State where each of our results isn't a value of type Parser[X], but a value of type Parser[State[Stream[Char],X]] (aliased as ParserS[X]). So the overall parsed result isn't a value, but a monadic state value, which is then run on some Stream[Char]. This is almost a monad transformer, but we have to do the lifting/unlifting manually. It makes the code a bit uglier, as we need to lift values or use map/flatMap in several places, but I believe it's still reasonable.
import scala.util.parsing.combinator._
import scalaz._
import Scalaz._
import Traverse._

object ParenParser extends RegexParsers with States {
  type S[X] = State[Stream[Char],X];
  type ParserS[X] = Parser[S[X]];

  // Haskell's `return` for States
  def toState[S,X](x: X): State[S,X] = gets(_ => x)

  // Haskell's `mapM` for State
  def mapM[S,X](l: List[State[S,X]]): State[S,List[X]] =
    l.traverse[({type L[Y] = State[S,Y]})#L,X](identity _);

  // .................................................

  // Read the next character from the stream inside the state
  // and update the state to the stream's tail.
  def next: S[Char] = state(s => (s.tail, s.head));

  def paren: ParserS[List[(Char, String)]] =
    "(" ~> contents <~ ")" ^^ (_ flatMap {
      case (s, m) => next map (v => (v -> s) :: m)
    })

  def contents: ParserS[(String, List[(Char, String)])] = digits | parens;

  def digits: ParserS[(String, List[(Char, String)])] =
    "\\d+".r ^^ (_ -> Nil) ^^ (toState _)

  def parens: ParserS[(String, List[(Char, String)])] =
    rep1(paren) ^^ (mapM _) ^^ (_.map(
      ps => ps.map(_.head._1).mkString -> ps.flatten
    ))

  def parse(s: String): ParseResult[S[Map[Char,String]]] =
    parseAll(paren, s).map(_.map(_.toMap))

  def parse(s: String, names: Stream[Char]): ParseResult[Map[Char,String]] =
    parse(s).map(_ ! names);
}

object ParenParserTest extends App {
  {
    println(ParenParser.parse("((617)((0)(32)))", Stream('a' to 'z': _*)));
  }
}
Note: I believe that your approach with StateT[Parser, Stream[Char], _] isn't conceptually correct. The type says that we're constructing a parser given some state (a stream of names), so given different streams we could get different parsers. This is not what we want: we only want the result of parsing to depend on the names, not the whole parser. In this respect Parser[State[Stream[Char],_]] seems more appropriate (Haskell's Parsec takes a similar approach; the state/monad is inside the parser).