BFS Function Using Scala

I'm trying to implement a breadth-first search function in Scala, but I'm stuck with this error:
Cannot resolve overloaded method 'enqueue'
Here is what I have so far:
sealed trait BinaryTree[+A]
case object Empty extends BinaryTree[Nothing]
case class Node[+A](value: A, left: BinaryTree[A], right: BinaryTree[A]) extends BinaryTree[A]
val t = Node(3,Node(4, Node(6,Empty,Node(11,Empty,Empty)),Node(7,Node(12,Empty,Empty),Empty)),Node(5,Node(8,Node(13,Empty,Empty),Empty),Node(9,Empty,Empty)))
import scala.collection.mutable

def bfs(root: Node[Int]): Unit = {
  val q = new mutable.Queue[Node[Int]]()
  var node = root
  q.enqueue(node)
  while (q.nonEmpty) {
    node = q.dequeue()
    print(node.value + " ")
    if (node.left != null) q.enqueue(node.left)
    if (node.right != null) q.enqueue(node.right)
  }
}
How can I make this function work?

node.left is a BinaryTree, but queue.enqueue expects a Node, so the call can't be resolved.
Also, you should not be using vars, nulls, imperative loops or mutable structures. If you are learning Scala anyway, you might as well learn to use it the right way.
import scala.annotation.tailrec
import scala.collection.immutable.Queue

def bfs(root: Node[Int]): Unit = {
  @tailrec
  def loop(queue: Queue[BinaryTree[Int]]): Unit = queue.dequeueOption match {
    case None =>
    case Some((Empty, q)) => loop(q)
    case Some((node: Node[Int], q)) =>
      print(s"${node.value} ")
      loop(q.enqueue(node.left).enqueue(node.right))
  }
  loop(Queue.empty[BinaryTree[Int]].enqueue(root))
  println()
}
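As a quick sanity check (assuming the sample tree t defined in the question), calling the function should print the values level by level:
bfs(t) // expected output: 3 4 5 6 7 8 9 11 12 13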

Related

How to reduce each individual ingredient according to another seq of 'applied' ingredients in a recipe

So I have a soup that is composed of a sequence of ingredients.
I need to determine what ingredients I still need to apply to the soup, based on a sequence of mixtures I have already added to the soup that I am making.
case class Ingredient(name: String)
case class Mixture(ingredient: Ingredient, amount: Int)
// ingredients required to make the soup
val i1 = Ingredient("water")
val i2 = Ingredient("salt")
val i3 = Ingredient("sugar")
val soupRequirements = Seq(Mixture(i1, 100), Mixture(i2, 200), Mixture(i3, 50))
println(soupRequirements)
val addedIngredients = Seq(Mixture(i1, 50), Mixture(i2, 200), Mixture(i3, 40))
def determineWhatsLeft(soupRequirements: Seq[Mixture], addedIngredients: Seq[Mixture]): Seq[Mixture] = ???
Reference: https://scastie.scala-lang.org/X6PIG7zYQOGuZX7zcebgRQ
How exactly can I reduce each mixture by the correct amount in a functional way?
First define a way of reducing two Mixtures via an infix operator:
implicit class ReduceMixtures(a: Mixture) {
  def +(b: Mixture): Mixture =
    if (a.ingredient == b.ingredient) Mixture(a.ingredient, a.amount - b.amount)
    else a
}
Note how, if the ingredients do not match, we return a unchanged. Now we can implement determineWhatsLeft using foldLeft:
def determineWhatsLeft(soupRequirements: Seq[Mixture], addedIngredients: Seq[Mixture]): Seq[Mixture] = {
  addedIngredients.foldLeft(soupRequirements) { case (acc, next) => acc.map(_ + next) }
}
With foldLeft the order does not matter; however, if the order of soupRequirements and addedIngredients were always mirrored, then we could zip and map like so:
def determineWhatsLeft(soupRequirements: Seq[Mixture], addedIngredients: Seq[Mixture]): Seq[Mixture] =
  (soupRequirements zip addedIngredients).map { case (a, b) => a + b }
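As a quick sanity check (a sketch using the sample values from the question), either implementation should report that 50 water, 0 salt and 10 sugar are still needed:
determineWhatsLeft(soupRequirements, addedIngredients)
// List(Mixture(Ingredient(water),50), Mixture(Ingredient(salt),0), Mixture(Ingredient(sugar),10))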
Semigroup seems to be the least powerful abstraction fitting the requirement
import cats.implicits._
import cats.Semigroup
implicit object mixtureSemigroup extends Semigroup[Mixture] {
  def combine(a: Mixture, b: Mixture): Mixture =
    if (a.ingredient == b.ingredient) Mixture(a.ingredient, a.amount - b.amount)
    else a
}
implicit object seqMixtureSemigroup extends Semigroup[Seq[Mixture]] {
  def combine(soupRequirements: Seq[Mixture], addedIngredients: Seq[Mixture]): Seq[Mixture] =
    addedIngredients.foldLeft(soupRequirements) { case (acc, next) =>
      acc.map(_ |+| next)
    }
}
soupRequirements |+| addedIngredients

Unpacking tuple directly into class in scala

Scala gives the ability to unpack a tuple into multiple local variables when performing various operations, for example if I have some data
val infos = Array(("Matt", "Awesome"), ("Matt's Brother", "Just OK"))
then instead of doing something ugly like
infos.map{ person_info => person_info._1 + " is " + person_info._2 }
I can choose the much more elegant
infos.map{ case (person, status) => person + " is " + status }
One thing I've often wondered about is how to directly unpack the tuple into, say, the arguments to be used in a class constructor. I'm imagining something like this:
case class PersonInfo(person: String, status: String)
infos.map{ case (p: PersonInfo) => p.person + " is " + p.status }
or even better if PersonInfo has methods:
infos.map{ case (p: PersonInfo) => p.verboseStatus() }
But of course this doesn't work. Apologies if this has already been asked -- I haven't been able to find a direct answer -- is there a way to do this?
I believe you can get to the methods, at least in Scala 2.11.x. Also, if you haven't heard of it, you should check out The Neophyte's Guide to Scala Part 1: Extractors.
The whole 16 part series is fantastic, but part 1 deals with case classes, pattern matching and extractors, which is what I think you are after.
Also, I get that java.lang.String complaint in IntelliJ as well; it defaults to that for reasons that are not entirely clear to me. I was able to work around it by explicitly setting the type in the typical "postfix style", i.e. _: String. There must be some way to work around that though.
object Demo {
  case class Person(name: String, status: String) {
    def verboseStatus() = s"The status of $name is $status"
  }

  val peeps = Array(("Matt", "Alive"), ("Soraya", "Dead"))

  peeps.map {
    case p @ (_: String, _: String) => Person.tupled(p).verboseStatus()
  }
}
UPDATE:
So after seeing a few of the other answers, I was curious whether there were any performance differences between them. So I set up what I think might be a reasonable test, using an Array of 1,000,000 random string tuples, with each implementation run 100 times:
import scala.util.Random

object Demo extends App {
  //Utility Code
  def randomTuple(): (String, String) = {
    val random = new Random
    (random.nextString(5), random.nextString(5))
  }

  def timer[R](code: => R)(implicit runs: Int): Unit = {
    var total = 0L
    (1 to runs).foreach { i =>
      val t0 = System.currentTimeMillis()
      code
      val t1 = System.currentTimeMillis()
      total += (t1 - t0)
    }
    println(s"Time to perform code block ${total / runs}ms\n")
  }

  //Setup
  case class Person(name: String, status: String) {
    def verboseStatus() = s"$name is $status"
  }

  object PersonInfoU {
    def unapply(x: (String, String)) = Some(Person(x._1, x._2))
  }

  val infos = Array.fill[(String, String)](1000000)(randomTuple)

  //Timer
  implicit val runs: Int = 100

  println("Using two map operations")
  timer {
    infos.map(Person.tupled).map(_.verboseStatus)
  }

  println("Pattern matching and calling tupled")
  timer {
    infos.map {
      case p @ (_: String, _: String) => Person.tupled(p).verboseStatus()
    }
  }

  println("Another pattern matching without tupled")
  timer {
    infos.map {
      case (name, status) => Person(name, status).verboseStatus()
    }
  }

  println("Using unapply in a companion object that takes a tuple parameter")
  timer {
    infos.map { case PersonInfoU(p) => p.name + " is " + p.status }
  }
}
/*Results
Using two map operations
Time to perform code block 208ms
Pattern matching and calling tupled
Time to perform code block 130ms
Another pattern matching without tupled
Time to perform code block 130ms
WINNER
Using unapply in a companion object that takes a tuple parameter
Time to perform code block 69ms
*/
Assuming my test is sound, it seems the unapply in a companion-ish object was ~2x faster than the pattern matching, and pattern matching another ~1.5x faster than two maps. Each implementation probably has its use cases/limitations.
I'd appreciate it if anyone who sees anything glaringly dumb in my testing strategy would let me know about it (and sorry about that var). Thanks!
The extractor for a case class takes an instance of the case class and returns a tuple of its fields. You can write an extractor which does the opposite:
object PersonInfoU {
  def unapply(x: (String, String)) = Some(PersonInfo(x._1, x._2))
}
infos.map { case PersonInfoU(p) => p.person + " is " + p.status }
You can use tupled for the case class:
val infos = Array(("Matt", "Awesome"), ("Matt's Brother", "Just OK"))
infos.map(PersonInfo.tupled)
scala> infos: Array[(String, String)] = Array((Matt,Awesome), (Matt's Brother,Just OK))
scala> res1: Array[PersonInfo] = Array(PersonInfo(Matt,Awesome), PersonInfo(Matt's Brother,Just OK))
and then you can use PersonInfo how you need
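For instance (a small sketch reusing the infos array and the PersonInfo case class from the question), you can then map over the resulting values:
infos.map(PersonInfo.tupled).map(p => p.person + " is " + p.status)
// Array("Matt is Awesome", "Matt's Brother is Just OK")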
You mean like this (Scala 2.11.8)?
scala> :paste
// Entering paste mode (ctrl-D to finish)
case class PersonInfo(p: String)
Seq(PersonInfo("foo")) map {
case p# PersonInfo(info) => s"info=$info / ${p.p}"
}
// Exiting paste mode, now interpreting.
defined class PersonInfo
res4: Seq[String] = List(info=foo / foo)
Methods won't be possible by the way.
Several answers can be combined to produce a final, unified approach:
val infos = Array(("Matt", "Awesome"), ("Matt's Brother", "Just OK"))
object Person {
  case class Info(name: String, status: String) {
    def verboseStatus() = name + " is " + status
  }
  def unapply(x: (String, String)) = Some(Info(x._1, x._2))
}
infos.map{ case Person(p) => p.verboseStatus }
Of course in this small case it's overkill, but for more complex use cases this is the basic skeleton.
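For reference, on the sample infos array this map should produce Array("Matt is Awesome", "Matt's Brother is Just OK").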

Scala primitives as reference types?

Does Scala provide a means of accessing primitives by reference (e.g., on the heap) out of the box? E.g., is there an idiomatic way of making the following code return 1?
import scala.collection.mutable
val m = new mutable.HashMap[String, Int]
var x = m.getOrElseUpdate("foo", 0)
x += 1
m.get("foo") // The map value should be 1 after the preceding update.
I expect I should be able to use a wrapper class like the following as the map's value type (thus storing pointers to the WrappedInts):
class WrappedInt(var theInt:Int)
...but I'm wondering if I'm missing a language or standard library feature.
You can't do that with primitives or their non-primitive counterparts in either Java or Scala. I don't see any other way but to use the WrappedInt.
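For completeness, a rough sketch of how the WrappedInt wrapper from the question could be used (the counts name is just illustrative; nothing beyond the classes already shown is assumed):
import scala.collection.mutable

class WrappedInt(var theInt: Int)

val counts = new mutable.HashMap[String, WrappedInt]
val x = counts.getOrElseUpdate("foo", new WrappedInt(0))
x.theInt += 1          // mutates the shared WrappedInt instance
counts("foo").theInt   // 1, because the map stores a reference to the same object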
If your goal is to increment map values by key, then you can use some nicer solutions instead of a wrapper.
val key = "foo"
val v = m.put(key, m.getOrElse(key, 0) + 1)
or another approach would be to set a default value 0 for the map:
val m2 = m.withDefault(_ => 0)
val v = m2.put(key, m2(key) + 1)
or add an extension method updatedWith (note the map needs to be a mutable one for put to be available):
implicit class MapExtensions[K, V](val map: mutable.Map[K, V]) extends AnyVal {
  def updatedWith(key: K, default: V)(f: V => V) = {
    map.put(key, f(map.getOrElse(key, default)))
  }
}
val m3 = m.updatedWith("foo", 0) { _ + 1 } // m is updated in place; put returns the previous value, if any

Filtering inside `for` with pattern matching

I am reading a TSV file using something like this:
case class Entry(entryType: Int, value: Int)
def filterEntries(): Iterator[Entry] = {
for {
line <- scala.io.Source.fromFile("filename").getLines()
} yield new Entry(line.split("\t").map(x => x.toInt))
}
Now I am interested both in filtering out entries whose entryType is set to 0 and in ignoring lines with a column count greater or less than 2 (which does not match the constructor). I was wondering if there's an idiomatic way to achieve this, maybe using pattern matching and an unapply method in a companion object. The only thing I can think of is using .filter on the resulting iterator.
I will also accept solutions not involving a for comprehension, as long as they return Iterator[Entry]. The solutions must be tolerant of malformed input.
This is more state-of-arty:
package object liner {
  implicit class R(val sc: StringContext) {
    object r {
      def unapplySeq(s: String): Option[Seq[String]] = sc.parts.mkString.r unapplySeq s
    }
  }
}

package liner {

  case class Entry(entryType: Int, value: Int)

  object I {
    def unapply(s: String): Option[Int] = util.Try(s.toInt).toOption
  }

  object Test extends App {
    def lines = List("1 2", "3", "", " 4 5 ", "junk", "0, 100000", "6 7 8")
    def entries = lines flatMap {
      case r"""\s*${I(i)}(\d+)\s+${I(j)}(\d+)\s*""" if i != 0 => Some(Entry(i, j))
      case __________________________________________________ => None
    }
    Console println entries
  }
}
Hopefully, the regex interpolator will make it into the standard distro soon, but this shows how easy it is to rig up. Also hopefully, a scanf-style interpolator will allow easy extraction with case f"$i%d".
I just started using the "elongated wildcard" in patterns to align the arrows.
There is a pupal or maybe larval regex macro:
https://github.com/som-snytt/regextractor
You can create variables in the head of the for-comprehension and then use a guard:
edit: ensure length of array
for {
  line <- scala.io.Source.fromFile("filename").getLines()
  arr = line.split("\t").map(x => x.toInt)
  if arr.size == 2 && arr(0) != 0
} yield new Entry(arr(0), arr(1))
I have solved it using the following code:
import scala.util.{Try, Success}
val lines = List(
  "1\t2",
  "1\t",
  "2",
  "hello",
  "1\t3"
)

case class Entry(entryType: Int, value: Int)

object Entry {
  def unapply(line: String) = {
    line.split("\t").map(x => Try(x.toInt)) match {
      case Array(Success(entryType: Int), Success(value: Int)) => Some(Entry(entryType, value))
      case _ =>
        println("Malformed line: " + line)
        None
    }
  }
}

for {
  line <- lines
  entryOption = Entry.unapply(line)
  if entryOption.isDefined
} yield entryOption.get
The left hand side of a <- or = in a for-loop may be a fully-fledged pattern. So you may write this:
def filterEntries(): Iterator[Entry] = for {
  line <- scala.io.Source.fromFile("filename").getLines()
  arr = line.split("\t").map(x => x.toInt)
  if arr.size == 2
  // now you may use pattern matching to extract the array
  Array(entryType, value) = arr
  if entryType != 0
} yield Entry(entryType, value)
Note that this solution will throw a NumberFormatException if a field is not convertible to an Int. If you do not want that, you'll have to encapsulate x.toInt with a Try and pattern match again.
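For example, a possible sketch of that Try-based variant (my illustration, not part of the original answer):
import scala.util.Try

def filterEntries(): Iterator[Entry] = for {
  line <- scala.io.Source.fromFile("filename").getLines()
  arr = line.split("\t").flatMap(x => Try(x.toInt).toOption) // silently drop non-numeric fields
  if arr.length == 2 && arr(0) != 0                          // keep well-formed rows with a non-zero entryType
} yield Entry(arr(0), arr(1))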

Selection Sort Generic type implementation

I worked my way through implementing recursive versions of selection sort and quicksort. I am trying to modify the code so that it can sort a list of any generic type; I want to assume that the generic type supplied can be converted to Comparable at runtime.
Does anyone have a link, code or tutorial on how to do this, please?
I am trying to modify this particular code:
def main(args: Array[String]) {
  val l = List(2, 4, 5, 6, 8)
  print(quickSort(l))
}

def quickSort(x: List[Int]): List[Int] = {
  x match {
    case xh :: xt =>
      val (first, pivot, second) = partition(x)
      quickSort(first) ::: (pivot :: quickSort(second))
    case Nil => x
  }
}

def partition(x: List[Int]) = {
  val pivot = x.head
  var first: List[Int] = List()
  var second: List[Int] = List()
  val fun = (i: Int) => {
    if (i < pivot)
      first = i :: first
    else
      second = i :: second
  }
  x.tail.foreach(fun)
  (first, pivot, second)
}
Language: SCALA
In Scala, Java's Comparator is replaced by Ordering (quite similar, but it comes with more useful methods). Orderings are implemented for several types (primitives, strings, BigDecimals, etc.) and you can provide your own implementations.
You can then use Scala implicits to ask the compiler to pick the correct one for you:
def sort[A]( lst: List[A] )( implicit ord: Ordering[A] ) = {
...
}
If you are using a predefined ordering, just call:
sort( myLst )
and the compiler will infer the second argument. If you want to declare your own ordering, use the keyword implicit in the declaration. For instance:
implicit val fooOrdering = new Ordering[Foo] {
def compare( f1: Foo, f2: Foo ) = {...}
}
and it will be used implicitly if you try to sort a List of Foo.
If you have several implementations for the same type, you can also explicitly pass the correct ordering object:
sort( myFooLst )( fooOrdering )
More info in this post.
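As an illustration (a sketch of mine, adapted from the Int version in the question rather than taken from this answer), the quicksort itself could be generalised like this:
def quickSort[A](x: List[A])(implicit ord: Ordering[A]): List[A] = x match {
  case Nil => Nil
  case pivot :: tail =>
    // split the remaining elements around the pivot using the supplied Ordering
    val (first, second) = tail.partition(ord.lt(_, pivot))
    quickSort(first) ::: pivot :: quickSort(second)
}

quickSort(List(2, 8, 5, 6, 4)) // List(2, 4, 5, 6, 8), using the predefined Ordering[Int]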
For Quicksort, I'll modify an example from the "Scala By Example" book to make it more generic.
import scala.collection.mutable.ArraySeq

class Quicksort[A <% Ordered[A]] {
  def sort(a: ArraySeq[A]): ArraySeq[A] =
    if (a.length < 2) a
    else {
      val pivot = a(a.length / 2)
      sort(a filter (pivot >)) ++ (a filter (pivot ==)) ++
        sort(a filter (pivot <))
    }
}
Test with Int
scala> val quicksort = new Quicksort[Int]
quicksort: Quicksort[Int] = Quicksort@38ceb62f

scala> val a = ArraySeq(5, 3, 2, 2, 1, 1, 9, 39, 219)
a: scala.collection.mutable.ArraySeq[Int] = ArraySeq(5, 3, 2, 2, 1, 1, 9, 39, 219)

scala> quicksort.sort(a).foreach(n => (print(n), print(" ")))
1 1 2 2 3 5 9 39 219
Test with a custom class implementing Ordered
scala> case class Meh(x: Int, y: Int) extends Ordered[Meh] {
     |   def compare(that: Meh) = (x + y).compare(that.x + that.y)
     | }
defined class Meh

scala> val q2 = new Quicksort[Meh]
q2: Quicksort[Meh] = Quicksort@7677ce29

scala> val a3 = ArraySeq(Meh(1,1), Meh(12,1), Meh(0,1), Meh(2,2))
a3: scala.collection.mutable.ArraySeq[Meh] = ArraySeq(Meh(1,1), Meh(12,1), Meh(0,1), Meh(2,2))

scala> q2.sort(a3)
res7: scala.collection.mutable.ArraySeq[Meh] = ArraySeq(Meh(0,1), Meh(1,1), Meh(2,2), Meh(12,1))
Even though, when coding Scala, I prefer a functional programming style (via combinators or recursion) over an imperative style (via variables and iteration), THIS TIME, for this specific problem, old-school imperative nested loops result in simpler code for the reader. I don't think falling back to imperative style is a mistake for certain classes of problems (such as sorting algorithms, which usually transform the input buffer in place, like a procedure, rather than producing a new sorted one).
Here is my solution:
package bitspoke.algo

import scala.math.Ordered
import scala.collection.mutable.Buffer

abstract class Sorter[T <% Ordered[T]] {
  // algorithm provided by subclasses
  def sort(buffer: Buffer[T]): Unit

  // check if the buffer is sorted (>= so that equal neighbours still count as sorted)
  def sorted(buffer: Buffer[T]) = buffer.isEmpty || buffer.view.zip(buffer.tail).forall { t => t._2 >= t._1 }

  // swap elements in buffer
  def swap(buffer: Buffer[T], i: Int, j: Int) {
    val temp = buffer(i)
    buffer(i) = buffer(j)
    buffer(j) = temp
  }
}

class SelectionSorter[T <% Ordered[T]] extends Sorter[T] {
  def sort(buffer: Buffer[T]): Unit = {
    for (i <- 0 until buffer.length) {
      var min = i
      for (j <- i until buffer.length) {
        if (buffer(j) < buffer(min))
          min = j
      }
      swap(buffer, i, min)
    }
  }
}
As you can see, rather than using java.lang.Comparable, I preferred scala.math.Ordered and Scala view bounds rather than upper bounds. That certainly works thanks to the many Scala implicit conversions of primitive types to rich wrappers.
You can write a client program as follows:
import bitspoke.algo._
import scala.collection.mutable._
val sorter = new SelectionSorter[Int]
val buffer = ArrayBuffer(3, 0, 4, 2, 1)
sorter.sort(buffer)
assert(sorter.sorted(buffer))
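Since the view bound only requires an implicit conversion to Ordered, the same sorter should also work for other element types, for example (a quick sketch, assuming nothing beyond the classes above):
val words = ArrayBuffer("pear", "apple", "fig")
val stringSorter = new SelectionSorter[String]
stringSorter.sort(words)           // words is now ArrayBuffer("apple", "fig", "pear")
assert(stringSorter.sorted(words))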