Calling one GPU kernel from another with NumbaPro - numba

I wish to make a call from one GPU kernel to another:
import numpy
from numbapro import vectorize

sig = 'int16(int16, int16)'

@vectorize([sig], device=True, target='gpu')
def sum(a, b):
    return a + b

@vectorize([sig], target='gpu')
def proxy(a, b):
    return sum(a, b)

result = proxy(5, 10)  # this will fail!
I've added the device=True on the called function, but it doesn't seem to do the trick.
The failing line yields this error: TypingError: Untyped global name 'sum'
What may be wrong?


How to implement memoization in Scala without mutability?

I was recently reading Category Theory for Programmers, and in one of the challenges Bartosz proposes writing a function called memoize which takes a function as an argument and returns the same function, with the difference that the first time this new function is called with some argument it stores the result, and then returns that stored result each time it is called with the same argument again.
def memoize[A, B](f: A => B): A => B = ???
The problem is, I can't think of any way to implement this function without resorting to mutability. Moreover, the implementations I have seen use mutable data structures to accomplish the task.
My question is, is there a purely functional way of accomplishing this? Maybe without mutability or by using some functional trick?
Thanks for reading my question and for any future help. Have a nice day!
is there a purely functional way of accomplishing this?
No. Not in the narrowest sense of pure functions and using the given signature.
TLDR: Use mutable collections, it's okay!
Impurity of g
val g = memoize(f)
// state 1
g(a)
// state 2
What would you expect to happen for the call g(a)?
If g(a) memoizes the result, an (internal) state has to change, so the state is different after the call g(a) than before.
As this could be observed from the outside, the call to g has side effects, which makes your program impure.
From the Book you referenced, 2.5 Pure and Dirty Functions:
[...] functions that always produce the same result given the same input and have no side effects are called pure functions.
Is this really a side effect?
Normally, at least in Scala, internal state changes are not considered side effects.
See the definition in the Scala Book
A pure function is a function that depends only on its declared inputs and its internal algorithm to produce its output. It does not read any other values from “the outside world” — the world outside of the function’s scope — and it does not modify any values in the outside world.
The following examples of lazy computations both change their internal states, but are normally still considered purely functional as they always yield the same result and have no side effects apart from internal state:
lazy val x = 1
// state 1: x is not computed
x
// state 2: x is 1
val ll = LazyList.continually(0)
// state 1: ll = LazyList(<not computed>)
ll(0)
// state 2: ll = LazyList(0, <not computed>)
In your case, the equivalent would be something using a private, mutable Map (as in the implementations you may have found), like:
import scala.collection.mutable

def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B]
  (a: A) => cache.getOrElseUpdate(a, f(a))
}
Note that the cache is not public.
So, for a pure function f and without looking at memory consumption, timings, reflection or other evil stuff, you won't be able to tell from the outside whether f was called twice or g cached the result of f.
In this sense, side effects are only things like printing output, writing to public variables, files etc.
Thus, this implementation is considered pure (at least in Scala).
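To make the observation concrete, here is a small usage sketch (square and the assertions are illustrative, not part of the original answer); from the call site, the memoized function is indistinguishable from f itself:
def square(n: Int): Int = n * n          // a pure function to memoize

val memoSquare = memoize(square)
assert(memoSquare(4) == square(4))       // same result as calling f directly
assert(memoSquare(4) == 16)              // a repeated call returns the cached value
// Nothing observable here reveals whether square ran once or twice.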
Avoiding mutable collections
If you really want to avoid var and mutable collections, you need to change the signature of your memoize method.
This is because, if g cannot change any internal state, it won't be able to memoize anything new after it has been initialized.
An (inefficient but simple) example would be
def memoizeOneValue[A, B](f: A => B)(a: A): (B, A => B) = {
  val b = f(a)
  val g = (v: A) => if (v == a) b else f(v)
  (b, g)
}

val (b1, g) = memoizeOneValue(f)(a1)
val (b2, h) = memoizeOneValue(g)(a2)
// ...
The result of f(a1) would be cached in g, but nothing else. Then, you could chain this and always get a new function.
If you are interested in a faster version of that, see #esse's answer, which does the same but more efficiently (using an immutable map, so O(log(n)) lookups instead of the O(n) chain of functions above).
Let's try (note: I have changed the return type of memoize to store the cached data):
import scala.language.existentials

type M[A, B] = A => T forSome { type T <: (B, A => T) }

def memoize[A, B](f: A => B): M[A, B] = {
  import scala.collection.immutable

  def withCache(cache: immutable.Map[A, B]): M[A, B] = a => cache.get(a) match {
    case Some(b) => (b, withCache(cache))
    case None =>
      val b = f(a)
      (b, withCache(cache + (a -> b)))
  }

  withCache(immutable.Map.empty)
}
def f(i: Int): Int = { print(s"Invoke f($i)"); i }
val (i0, m0) = memoize(f)(1) // f is only invoked the first time
val (i1, m1) = m0(1)
val (i2, m2) = m1(1)
Yes, there are purely functional ways to implement polymorphic function memoization. The topic is surprisingly deep and even summons the Yoneda Lemma, which is likely what Bartosz had in mind with this exercise.
The blog post Memoization in Haskell gives a nice introduction by simplifying the problem a bit: instead of looking at arbitrary functions it restricts the problem to functions from the integers.
The following memoize function takes a function of type Int -> a and
returns a memoized version of the same function. The trick is to turn
a function into a value because, in Haskell, functions are not
memoized but values are. memoize converts a function f :: Int -> a
into an infinite list [a] whose nth element contains the value of f n.
Thus each element of the list is evaluated when it is first accessed
and cached automatically by the Haskell runtime thanks to lazy
evaluation.
memoize :: (Int -> a) -> (Int -> a)
memoize f = (map f [0 ..] !!)
Apparently the approach can be generalised to functions over arbitrary domains. The trick is to come up with a way to use the type of the domain as an index into a lazy data structure used for "storing" previous values. And this is where the Yoneda Lemma comes in and my own understanding of the topic becomes flimsy.
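The same trick can be sketched in Scala for functions on the non-negative integers, reusing the memoizing behaviour of LazyList mentioned earlier (memoizeNat and slowSquare are illustrative names, not from the blog post):
def memoizeNat[A](f: Int => A): Int => A = {
  val cache = LazyList.from(0).map(f)  // element n is computed on first access and then kept
  n => cache(n)
}

val slowSquare = (n: Int) => { Thread.sleep(100); n * n }
val fastSquare = memoizeNat(slowSquare)
fastSquare(10)  // slow on the first call
fastSquare(10)  // fast afterwards: the LazyList keeps the computed value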

Why List.fill method has two group of parameters instead of one?

What is the reason behind List.fill being defined with two parameter groups instead of one group with the n and elem parameters together?
Current definition
def fill[A](n: Int)(elem: ⇒ A): CC[A]
Proposed definition
def fill[A](n: Int, elem: ⇒ A): CC[A]
Isn't it unnecessary boilerplate? Or is it designed to use the first part (List.fill(n)) as a curried function constructor?
You can write
List.fill(10){ val r = math.random; r * r }
but you cannot write
List.fill(10, val r = math.random; r * r)
and
List.fill(10, { val r = math.random; r * r })
looks somewhat awkward.
In this case it's almost irrelevant, but note that the way the arguments are grouped into argument lists can influence type inference quite significantly, e.g.
def map[X, Y](a: F[X])(f: X => Y): F[Y]
works perfectly fine without any type annotations most of the time, whereas
def map[X, Y](a: F[X], f: X => Y): F[Y]
is quite painful to use. Take a careful look at methods such as ap, ap2 or map2 in this piece of code, for example. There is a good reason why the argument lists are the way they are; you would notice it immediately if they were defined differently.
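As a sketch of the difference under Scala 2's inference rules (mapCurried and mapUncurried are made-up names standing in for the two signatures above):
def mapCurried[X, Y](xs: List[X])(f: X => Y): List[Y] = xs.map(f)
def mapUncurried[X, Y](xs: List[X], f: X => Y): List[Y] = xs.map(f)

mapCurried(List(1, 2, 3))(x => x + 1)            // compiles: X is fixed by the first argument list
mapUncurried(List(1, 2, 3), (x: Int) => x + 1)   // the lambda parameter needs an explicit type
// mapUncurried(List(1, 2, 3), x => x + 1)       // Scala 2 rejects this: "missing parameter type"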

Testing a generated curried function with Scala Test

I'm having a hard time trying to create a ScalaTest test that checks this function:
def curry[A, B, C](f: (A, B) => C): A => (B => C) =
  a => b => f(a, b)
The first thought I had was to validate that, given a function fx passed into curry(fx), it returns a curried version of that function.
Any tips?
One way to test it is to pass different f's to it and see whether you get back the function you expect. For example, you can test with an f that returns its arguments as a tuple:
def f(x: String, y: Int) = (x, y)
curry(f)("4")(7) must be(("4", 7))
IMO, testing it for a few different functions f and for a few different a and b would be more than sufficient assurance that something as trivial as this works as intended.
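A complete spec along those lines might look like this sketch (assuming the ScalaTest 3 AnyFlatSpec / Matchers style; the class and test names are illustrative):
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers

class CurrySpec extends AnyFlatSpec with Matchers {

  def curry[A, B, C](f: (A, B) => C): A => (B => C) =
    a => b => f(a, b)

  "curry" should "agree with the uncurried function for sample inputs" in {
    def pair(x: String, y: Int) = (x, y)
    def add(a: Int, b: Int) = a + b

    curry(pair)("4")(7) shouldBe (("4", 7))
    curry(add)(2)(3) shouldBe add(2, 3)
  }
}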

How does this Scala function work?

def flatMap[A, B](f: Rand[A])(g: A => Rand[B]): Rand[B] = rng => {
  val (a, r1) = f(rng)
  g(a)(r1)
}
I am confused by g(a)(r1) because g is supposed to take only one argument, so why r1?
I don't know the exact structure of Rand[A], so this is partly guessing. What you are returning from this function is a Rand[B], and when you implement it you start off by defining an argument rng of an anonymous function. This tells me that Rand is probably some kind of function itself.
In the second line (val (a, r1) = f(rng)) you apply the value rng to the instance f: Rand[A]. In the resulting tuple, a has type A.
Note with this that f(rng) is equal to explicitly calling apply: f.apply(rng).
You can use the value a to get a Rand[B] by applying this value to the function g. So, val rb: Rand[B] = g(a) (or g.apply(a)).
Now, you don't want to return an instance of Rand[B] from this anonymous function! Instead, you need to apply the previous result r1 to this instance rb. So you get rb(r1), or rb.apply(r1). Substituting rb with g(a) (or g.apply(a)) gives you g(a)(r1), or g.apply(a).apply(r1).
To summarize, as you said, g is supposed to only take one argument, which results in an instance of Rand[B], but that is not the expected return type here. You need to apply the result of the previous computation to this new computation to get the expected result.
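If Rand follows the usual definition from Functional Programming in Scala (the question does not show it, so this is an assumption), the whole thing can be spelled out with the intermediate value named explicitly:
object RandExample {
  // Assumed definitions, not shown in the question:
  trait RNG { def nextInt: (Int, RNG) }
  type Rand[+A] = RNG => (A, RNG)

  // The same flatMap, with the intermediate Rand[B] given a name:
  def flatMap[A, B](f: Rand[A])(g: A => Rand[B]): Rand[B] = rng => {
    val (a, r1) = f(rng)    // run f with the incoming generator
    val rb: Rand[B] = g(a)  // g takes a single argument and returns another Rand[B]
    rb(r1)                  // run that Rand[B] with the updated generator, i.e. g(a)(r1)
  }
}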

generating permutations with scalacheck

I have some generators like this:
val fooRepr = oneOf(a, b, c, d, e)
val foo = for (s <- choose(1, 5); c <- listOfN(s, fooRepr)) yield c.mkString("$")
This leads to duplicates ... I might get two a's, etc. What I really want is to generate a random permutation with exactly 0 or 1 of each of a, b, c, d, or e (with at least one of something), in any order.
I was thinking there must be an easy way, but I'm struggling to even find a hard way. :)
Edited: Ok, this seems to work:
val foo = for (s <- choose(1, 5);
               c <- permute(s, a, b, c, d, e)) yield c.mkString("$")

def permute[T](n: Int, gs: Gen[T]*): Gen[Seq[T]] = {
  val perm = Random.shuffle(gs.toList)
  for {
    is <- pick(n, 1 until gs.size)
    xs <- sequence[List, T](is.toList.map(perm(_)))
  } yield xs
}
...borrowing heavily from Gen.pick.
Thanks for your help, -Eric
Rex, thanks for clarifying exactly what I'm trying to do, and that's useful code, but perhaps not so nice with scalacheck, particularly if the generators in question are quite complex. In my particular case the generators a, b, c, etc. are generating huge strings.
Anyhow, there was a bug in my solution above; what worked for me is below. I put a tiny project demonstrating how to do this at github.
The guts of it is below. If there's a better way, I'd love to know it...
package powerset

import org.scalacheck._
import org.scalacheck.Gen._
import org.scalacheck.Gen
import scala.util.Random

object PowersetPermutations extends Properties("PowersetPermutations") {
  def a: Gen[String] = value("a")
  def b: Gen[String] = value("b")
  def c: Gen[String] = value("c")
  def d: Gen[String] = value("d")
  def e: Gen[String] = value("e")

  val foo = for (s <- choose(1, 5);
                 c <- permute(s, a, b, c, d, e)) yield c.mkString

  def permute[T](n: Int, gs: Gen[T]*): Gen[Seq[T]] = {
    val perm = Random.shuffle(gs.toList)
    for {
      is <- pick(n, 0 until gs.size)
      xs <- sequence[List, T](is.toList.map(perm(_)))
    } yield xs
  }

  implicit def arbString: Arbitrary[String] = Arbitrary(foo)

  property("powerset") = Prop.forAll {
    a: String => println(a); true
  }
}
Thanks,
Eric
You're not describing a permutation, but the power set (minus the empty set). Edit: you're describing a combination of a power set and a permutation. The power set of an indexed set N is isomorphic to 2^N, so we can simply write (in Scala alone; maybe you want to alter this for use with ScalaCheck):
def powerSet[X](xs: List[X]) = {
  val xis = xs.zipWithIndex
  (for (j <- 1 until (1 << xs.length)) yield {
    for ((x, i) <- xis if (j & (1 << i)) != 0) yield x
  }).toList
}
to generate all possible subsets of a given set. Of course, explicit generation of power sets is unwise if the original set contains more than a handful of elements. If you don't want to generate all of them, just pass in a random number from 1 until (1 << xs.length) and run the inner loop. (Switch to Long if there are 33-64 elements, and to BitSet if there are more yet.) You can then permute the result to switch the order around if you wish.
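Adapted to ScalaCheck, the random-bitmask idea could look like the following sketch (subsetGen is a made-up name; it assumes a non-empty list of at most 31 elements):
import org.scalacheck.Gen

def subsetGen[X](xs: List[X]): Gen[List[X]] =
  Gen.choose(1, (1 << xs.length) - 1).map { j =>
    // keep element i exactly when bit i of j is set; j >= 1 guarantees a non-empty subset
    for ((x, i) <- xs.zipWithIndex if (j & (1 << i)) != 0) yield x
  }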
Edit: there's another way to do this if you can generate permutations easily and you can add a dummy element: make your list one longer, with a Stop token. Then permute and .takeWhile(_ != Stop). Ta-da! Permutations of arbitrary length. (Filter out the zero-length answer if need be.)
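A possible ScalaCheck rendering of that Stop-token trick (the permutation helper and all names are illustrative, not from the answer):
import org.scalacheck.Gen

sealed trait Tok
case object Stop extends Tok
case class Elem(s: String) extends Tok

// Build a random permutation by repeatedly picking one remaining element.
def permutation[T](xs: List[T]): Gen[List[T]] =
  if (xs.isEmpty) Gen.const(Nil)
  else Gen.choose(0, xs.length - 1).flatMap { i =>
    permutation(xs.patch(i, Nil, 1)).map(xs(i) :: _)
  }

val elems: List[Tok] = List("a", "b", "c", "d", "e").map(Elem)

val prefixes: Gen[List[String]] =
  permutation(Stop :: elems)
    .map(_.takeWhile(_ != Stop).collect { case Elem(s) => s })
    .suchThat(_.nonEmpty)  // filter out the zero-length answer, as suggested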