Adding Xavier initialization in PyTorch - neural-network

I want to add Xavier initialization to the first layer of my neural network, but I am getting an error in this class:
class DemoNN(nn.Module):
    def __init__(self):
        super().__init__()
        torch.manual_seed(0)
        self.net = nn.Sequential(
            nn.Linear(2,2),
            torch.nn.init.xavier_uniform((nn.Linear(2,2)).weights),
            nn.Sigmoid(),
            nn.Linear(2,2),
            nn.Sigmoid(),
            nn.Linear(2,4),
            nn.Softmax()
        )
    def forward(self, X):
        self.net(X)

You are trying to initialize a layer inside the constructor of the nn.Sequential object: that call builds a brand-new nn.Linear (instead of referencing the one already in your network), accesses a non-existent weights attribute (the correct name is weight), and hands the returned tensor to nn.Sequential as if it were a layer. Also note that torch.nn.init.xavier_uniform is deprecated in favor of the in-place torch.nn.init.xavier_uniform_.
What you need to do is first construct self.net and only then initialize the second linear layer as you wish.
Here is how you should do it:
import torch
import torch.nn as nn

class DemoNN(nn.Module):
    def __init__(self):
        super().__init__()
        torch.manual_seed(0)
        self.net = nn.Sequential(
            nn.Linear(2,2),
            nn.Linear(2,2),
            nn.Sigmoid(),
            nn.Linear(2,2),
            nn.Sigmoid(),
            nn.Linear(2,4),
            nn.Softmax(dim=1)  # specify dim explicitly to avoid the deprecation warning
        )
        # initialize the second linear layer in place, after the network is built
        torch.nn.init.xavier_uniform_(self.net[1].weight)

    def forward(self, X):
        return self.net(X)  # don't forget to return the output
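A quick sanity check (the shapes here are made up, just to illustrate usage):

model = DemoNN()
print(model.net[1].weight)        # the Xavier-initialized weights
out = model(torch.randn(3, 2))    # a batch of 3 samples with 2 features each
print(out.shape)                  # torch.Size([3, 4])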

Related

Shorthand for map traverse reduce

I'm looking for a short way to traverse and reduce at the same time. Here is the solution I came up with:
import cats.{Applicative, Monad}
import cats.instances.list._
import cats.syntax.functor._
import cats.syntax.flatMap._
import cats.syntax.traverse._

object Test extends App {
  def intFM[F[_], M[_]](i: Int): F[M[Int]] = ???

  def traverseReduce[F[_]: Applicative, M[_]: Monad](lst: List[Int]) =
    lst.traverse(intFM[F, M]).map(_.reduce(_ >> _))
}
As you can see, I perform three operations: traverse, map, and reduce. I expected it to be possible to do the reduction while traversing. Is there a shorthand?
You can do it without traversing:
import cats.syntax.apply._ // for the mapN syntax on tuples

def traverseReduce[F[_]: Applicative, M[_]: Monad](lst: List[Int]): F[M[Int]] =
  lst.map(intFM[F, M]).reduce((_, _).mapN(_ >> _))
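For a concrete sanity check, here is a minimal sketch with hypothetical instantiations (F = Option, M = List) and a made-up intFM:

import cats.instances.list._
import cats.instances.option._
import cats.syntax.apply._
import cats.syntax.flatMap._

def intFM(i: Int): Option[List[Int]] = Some(List(i))

val result: Option[List[Int]] =
  List(1, 2, 3).map(intFM).reduce((_, _).mapN(_ >> _))
// mapN sequences the Options, >> chains the inner Lists: Some(List(3))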

FS2: How to get a java.io.InputStream from a fs2.Stream?

Say I have val fs2Stream: Stream[IO, Byte] and I need to, for example, call some Java library that requires a java.io.InputStream.
I suppose that I'm way too new to FS2, but I cannot seem to find the answer. I've tried to use fs2.io.toInputStream and fs2.io.readInputStream but I cannot figure out how to provide some of the required parameters. I've scoured the almighty Google for answers, but it seems that the API has changed since most people were last looking for an answer.
How can I go about doing something like the following?
def myFunc(data: fs2.Stream[IO, Byte]): InputStream[Byte] = someMagicalFunction(data)
You probably want something like this:
import cats.effect.{ContextShift, IO, Resource}
import java.io.InputStream

def myFunc(data: fs2.Stream[IO, Byte])
          (implicit cs: ContextShift[IO]): Resource[IO, InputStream] =
  data.through(fs2.io.toInputStream).compile.resource.lastOrError
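Note that the result is a Resource rather than a plain InputStream: the InputStream produced by fs2.io.toInputStream is only valid while the underlying stream is open, so Resource#use gives you a scope in which it is safe to read from it and closes everything afterwards.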
Then you can use it like:
import cats.effect.{ExitCode, IO, IOApp}

object JavaApi {
  def foo(is: InputStream): IO[Unit] = ???
}

object Main extends IOApp {
  def data: fs2.Stream[IO, Byte] = ???

  override def run(args: List[String]): IO[ExitCode] =
    myFunc(data).use(JavaApi.foo).as(ExitCode.Success)
}
Here is a Scastie with the code running.

How to express a type bound that forces the type param to behave like a functor

I'm struggling to write a method that operates on either a Free Monad or Tagless Final. I would like to use a type class to pass in an interpreter that produces a monad, and use map in the same function.
I don't know how to express the type bound so that the M type is a functor or monad.
import cats._
import cats.free.Free

def eval[M[_]](param: String)(implicit op: Algebra ~> M): M[String] = {
  val program: Free[Algebra, String] = Free.liftF(Operation(param))
  Free.foldMap(op)
}

def process[M[_]](param: String)(implicit op: Algebra ~> M): M[String] = {
  val result = eval(param)
  result.map(_.toUpper) // this doesn't compile because M is missing a map method
}
Try
import cats.{Monad, ~>}
import cats.free.Free
import cats.syntax.functor._
import scala.language.higherKinds

trait Algebra[_]
case class Operation(str: String) extends Algebra[String]

def eval[M[_]: Monad](param: String)(implicit op: Algebra ~> M): M[String] = {
  val program: Free[Algebra, String] = Free.liftF(Operation(param))
  Free.foldMap(op).apply(program)
}

def process[M[_]: Monad](param: String)(implicit op: Algebra ~> M): M[String] = {
  val result = eval(param)
  result.map(_.toUpperCase)
}
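To see this in action, here is a hypothetical interpreter into cats.Id (the interpreter and its behavior are made up for illustration):

import cats.{Id, ~>}

implicit val idInterpreter: Algebra ~> Id = new (Algebra ~> Id) {
  def apply[A](fa: Algebra[A]): Id[A] = fa match {
    case Operation(str) => str // just echo the payload
  }
}

process[Id]("hello") // "HELLO"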

spark-shell cannot find the class to be extended

Why can't I load the file with the following code in spark-shell?
import org.apache.spark.sql.types._
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Encoders
import org.apache.spark.sql.expressions.Aggregator

case class Data(i: Int)

val customSummer = new Aggregator[Data, Int, Int] {
  def zero: Int = 0
  def reduce(b: Int, a: Data): Int = b + a.i
  def merge(b1: Int, b2: Int): Int = b1 + b2
  def finish(r: Int): Int = r
}.toColumn()
The error:
<console>:47: error: object creation impossible, since:
it has 2 unimplemented members.
/** As seen from <$anon: org.apache.spark.sql.expressions.Aggregator[Data,Int,Int]>, the missing signatures are as follows.
* For convenience, these are usable as stub implementations.
*/
def bufferEncoder: org.apache.spark.sql.Encoder[Int] = ???
def outputEncoder: org.apache.spark.sql.Encoder[Int] = ???
val customSummer = new Aggregator[Data, Int, Int] {
Update: user8371915's solution works. But the following script cannot be loaded, with a different error. I used :load script.sc in spark-shell.
import org.apache.spark.sql.expressions.Aggregator
class MyClass extends Aggregator
Error:
loading ./script.sc...
import org.apache.spark.sql.expressions.Aggregator
<console>:11: error: not found: type Aggregator
class MyClass extends Aggregator
Update(2017-12-03): it doesn't seem to work within Zeppelin, either.
As per the error message, you didn't implement bufferEncoder and outputEncoder. Please check the API docs for the list of abstract methods that have to be implemented.
These two should suffice:
def bufferEncoder: Encoder[Int] = Encoders.scalaInt
def outputEncoder: Encoder[Int] = Encoders.scalaInt
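Putting it together, your original snippet with just the two encoders added should load in spark-shell:

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

case class Data(i: Int)

val customSummer = new Aggregator[Data, Int, Int] {
  def zero: Int = 0
  def reduce(b: Int, a: Data): Int = b + a.i
  def merge(b1: Int, b2: Int): Int = b1 + b2
  def finish(r: Int): Int = r
  def bufferEncoder: Encoder[Int] = Encoders.scalaInt
  def outputEncoder: Encoder[Int] = Encoders.scalaInt
}.toColumn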

Is it possible to write a typeclass with different implementations?

This is a follow-up to my previous question
Suppose I have a trait ConverterTo and two implementations:
trait ConverterTo[T] {
  def convert(s: String): Option[T]
}

object Converters1 {
  implicit val toInt: ConverterTo[Int] = ???
}

object Converters2 {
  implicit val toInt: ConverterTo[Int] = ???
}
I also have two classes, A1 and A2:
class A1 {
  def foo[T](s: String)(implicit ct: ConverterTo[T]) = ct.convert(s)
}

class A2 {
  def bar[T](s: String)(implicit ct: ConverterTo[T]) = ct.convert(s)
}
Now I would like any foo[T] call to use Converters1 and any bar[T] call to use Converters2 without importing Converters1 and Converters2 in the client code.
val a1 = new A1()
val a2 = new A2()
...
val i = a1.foo[Int]("0") // use Converters1 without importing it
...
val j = a2.bar[Int]("0") // use Converters2 without importing it
Can it be done in Scala?
Import the converters inside the class:
class A1 {
  import Converters1._

  private def fooPrivate[T](s: String)(implicit ct: ConverterTo[T]) = ct.convert(s)

  def fooShownToClient[T](s: String) = fooPrivate(s)
}
Then use the method that is shown to the client:
val a1 = new A1()
a1.fooShownToClient[Int]("0")
Now the client is unaware of the converters.
If you have a situation where you need more local control, you can opt to pass the implicit parameters explicitly:
val i = a1.foo("0")(Converters1.toInt)
val j = a2.bar("0")(Converters2.toInt)
It really depends on what you want. If you want to select a particular implementation without polluting the local scope, do it like this (or introduce a new scope). mohit's solution works well if the classes need a particular implementation (although in that case there's no real point in declaring the dependency as implicit anymore).
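As a sketch of the "introduce a new scope" option (same classes as above):

val i = {
  import Converters1._
  a1.foo[Int]("0")
}
val j = {
  import Converters2._
  a2.bar[Int]("0")
}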