I am trying to calculate the area of a polygon in Scala.
Reference for the area of a polygon: https://www.mathopenref.com/coordpolygonarea.html
I was able to write the code successfully, but I want the last for loop to be chained onto the points variable like the other map steps. A plain map does not work because the loop pairs each point with its successor, so it produces only N-1 terms.
val lines = io.Source.stdin.getLines()
val nPoints = lines.next.toInt
var s = 0
val points = lines.take(nPoints).toList
  .map(_.split(" "))
  .map { case Array(e1, e2) => (e1.toInt, e2.toInt) }
for (i <- 0 until points.length - 1) { // want this for loop chained onto the points variable
  s = s + (points(i)._1 * points(i + 1)._2) - (points(i)._2 * points(i + 1)._1)
}
println(scala.math.abs(s / 2.0))
You're looking for a sliding and a foldLeft, I think. sliding(2) will give you lists List(i, i+1), List(i+1, i+2),... and then you can use foldLeft to do the computation:
val s = points.sliding(2).foldLeft(0) {
  case (acc, (p1, p2) :: (q1, q2) :: Nil) => acc + p1 * q2 - p2 * q1
}
If I read the formula correctly, there is one additional term coming from the pair of the last and first vertices (which looks like it's missing from your implementation, by the way). You could either add that term separately or repeat the first vertex at the end of your list. As a result, combining everything into the same line that defines points doesn't work so easily. In any case, I think it is more readable split into separate statements: one defining points, another repeating the first element of points at the end of the list, and then the fold.
Edited to fix the fact that you have tuples for your points, not lists. I'm also tacitly assuming points is a List.
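Putting those pieces together, a minimal sketch (assuming points is the List[(Int, Int)] built in the question) might look like this:

val closed = points :+ points.head // repeat the first vertex to close the polygon
val s = closed.sliding(2).foldLeft(0) {
  case (acc, (p1, p2) :: (q1, q2) :: Nil) => acc + p1 * q2 - p2 * q1
  case (acc, _)                           => acc // degenerate window when there are fewer than 2 points
}
val area = scala.math.abs(s / 2.0)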
How about something like this (untested)?
val points2 = points.tail :+ points.head // shift the list one to the left
val area = scala.math.abs(
  (points zip points2)
    .map { case ((x1, y1), (x2, y2)) => x1 * y2 - x2 * y1 }
    .sum / 2.0)
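For example, with a 3-4-5 right triangle (hypothetical input, just to sanity-check the formula):

val points = List((0, 0), (4, 0), (0, 3))
// pairs after the shift: (0,0)->(4,0), (4,0)->(0,3), (0,3)->(0,0)
// terms: 0*0 - 4*0 = 0,  4*3 - 0*0 = 12,  0*0 - 0*3 = 0
// area = |0 + 12 + 0| / 2.0 = 6.0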
val keywords = List("do", "abstract", "if")
val resMap = io.Source
  .fromFile("src/demo/keyWord.txt")
  .getLines()
  .zipWithIndex
  .foldLeft(Map.empty[String, Seq[Int]].withDefaultValue(Seq.empty[Int])) {
    case (m, (line, idx)) =>
      val subMap = line.split("\\W+")
        .toSeq                          // separate the words
        .filter(keywords.contains)      // keep only keywords
        .groupBy(identity)              // make a Map with the keyword as key
        .mapValues(_.map(_ => idx + 1)) // and a Seq of line numbers as value
        .withDefaultValue(Seq.empty[Int])
      keywords.map(kw => (kw, m(kw) ++ subMap(kw))).toMap
  }
println("keyword\t\tlines\t\tcount")
keywords.sorted.foreach{kw =>
println(kw + "\t\t" +
resMap(kw).distinct.mkString("[",",","]") + "\t\t" +
resMap(kw).length)
}
This code is not mine and I don't own it; I'm using it for study purposes. I am still learning, and I am stuck on converting consecutive line numbers into ranges: the word "if", for example, occurs on many lines, and when three or more consecutive line numbers appear they should be written with a dash in between, e.g. 20-22 rather than 20, 21, 22. How can I implement this? I just want to learn how it's done.
output:
keyword lines count
abstract [1] 1
do [6] 1
if [14,15,16,17,18] 5
But I want the result to be [14-18], because the word "if" occurs on lines 14 to 18.
First off, I'll give the customary caution that SO isn't meant to be a place to crowdsource answers to homework or projects. I'll give you the benefit of the doubt that this isn't the case.
That said, I hope these suggestions help you gain some understanding of how to break this problem down:
1. Your existing implementation has nothing in place to determine whether the Int values are in fact consecutive, so you will need to sort the Ints returned from resMap(kw).distinct in order to set yourself up for the next steps. You can figure out how to do this.
2. You will then need to group the Ints by their consecutive runs. For example, if you have (14,15,16,18,19,20,22) then this needs to be further grouped into ((14,15,16),(18,19,20),(22)). You can come up with your own algorithm for this.
3. Map over the outer collection (which is a Seq[Seq[Int]] at this point), with different handling depending on whether the length of the inner Seq is greater than 1. If it is, you can safely call head and last to get the Ints you need for rendering your range. Alternatively, you can more idiomatically use a for-comprehension that composes the values from headOption and lastOption to build the same range string. You said something about a length of 3 in your question, so you can adjust this step to meet that need as necessary.
4. Lastly, you now have a Seq[String] looking like ("14-16","18-20","22") that you need to join together using a mkString call similar to the one you already have with the square brackets.
For reference, you should get further acquainted with the Scaladoc for the Seq trait:
https://www.scala-lang.org/api/2.12.8/scala/collection/Seq.html
Here's one way to go about it.
def collapseConsecutives(nums: Seq[Int]): List[String] =
  if (nums.isEmpty) Nil
  else
    nums.foldRight((nums.last, List.empty[List[Int]])) {
      // n is consecutive with the previous (next-larger) value: extend the current run
      case (n, (prev, acc)) if prev - n == 1 => (n, (n :: acc.head) :: acc.tail)
      // otherwise start a new run
      case (n, (_, acc)) => (n, List(n) :: acc)
    }._2.map { ns =>
      if (ns.length < 3) ns.mkString(",") // 1 or 2 stay uncollapsed
      else s"${ns.head}-${ns.last}"       // 3 or more collapse into a range
    }
usage:
println(kw + "\t\t" +
  collapseConsecutives(resMap(kw).distinct).mkString("[", ",", "]") + "\t\t" +
  resMap(kw).length)
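With that change, the sample output above becomes (note the collapsed "if" row):

keyword lines count
abstract [1] 1
do [6] 1
if [14-18] 5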
import scala.annotation.tailrec
import breeze.linalg.DenseVector

val dimensionality = 10
val zeros = DenseVector.zeros[Double](dimensionality)

@tailrec private def specials(list: List[DenseVector[Double]], i: Int): List[DenseVector[Double]] = {
  if (i >= dimensionality) list
  else {
    val vec = zeros.copy
    vec(i to i) := 1.0
    specials(vec :: list, i + 1)
  }
}
val specialList = specials(Nil, 0).toVector
specialList.map(...doing my thing...)
Should I write my tail-recursive function using a List as the accumulator, as above, and then write
specials(Nil, 0).toVector
or should I write my tail recursion with a Vector in the first place? Which is computationally more efficient?
By the way: specialList contains DenseVectors where every entry is 0 except for one entry, which is 1. There are as many DenseVectors as each one is long.
I'm not sure what you're trying to do here, but you could rewrite your code like so:
type Mat = List[Vector[Int]]
val zeros = Vector.fill(dimensionality)(0) // a plain, immutable Scala Vector

@tailrec
private def specials(mat: Mat, i: Int): Mat = i match {
  case `dimensionality` => mat
  case _ =>
    val v = zeros.updated(i, 1) // updated returns a new Vector, so no copy is needed
    specials(v :: mat, i + 1)
}
As you are dealing with a matrix, Vector is probably a better choice.
Let's compare the performance characteristics of both variants:
List: prepending takes constant time, conversion to Vector takes linear time.
Vector: prepending takes "effectively" constant time (eC), no subsequent conversion needed.
If you compare the implementations of List and Vector, then you'll find out that prepending to a List is a simpler and cheaper operation than prepending to a Vector. Instead of just adding another element at the front as it is done by List, Vector potentially has to replace a whole branch/subtree internally. On average, this still happens in constant time ("effectively" constant, because the subtrees can differ in their size), but is more expensive than prepending to List. On the plus side, you can avoid the call to toVector.
Eventually, the crucial point of interest is the size of the collection you want to create (or in other words, the number of recursive prepend steps you are doing). It's entirely possible that there is no clear winner and one of the two variants is faster for <= n steps, whereas the other is faster for > n steps. In my naive toy benchmark, List/toVector seemed to be faster for fewer than 8k elements, but you should perform a set of well-chosen benchmarks that represent your scenario adequately.
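A very naive sketch of such a comparison (plain wall-clock timing only; for serious numbers use a proper harness such as JMH, and expect the crossover point to be machine-dependent):

// Build n one-hot Vectors two ways and compare rough timings.
def viaList(n: Int): Vector[Vector[Int]] = {
  val zeros = Vector.fill(n)(0)
  @annotation.tailrec
  def loop(acc: List[Vector[Int]], i: Int): List[Vector[Int]] =
    if (i >= n) acc else loop(zeros.updated(i, 1) :: acc, i + 1)
  loop(Nil, 0).toVector // prepend to a List, convert once at the end
}

def viaVector(n: Int): Vector[Vector[Int]] = {
  val zeros = Vector.fill(n)(0)
  @annotation.tailrec
  def loop(acc: Vector[Vector[Int]], i: Int): Vector[Vector[Int]] =
    if (i >= n) acc else loop(zeros.updated(i, 1) +: acc, i + 1)
  loop(Vector.empty, 0) // prepend to a Vector directly
}

def time[A](label: String)(f: => A): A = {
  val t0 = System.nanoTime()
  val res = f
  println(s"$label: ${(System.nanoTime() - t0) / 1e6} ms")
  res
}

time("List + toVector")(viaList(5000))
time("Vector directly")(viaVector(5000))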
I am trying to find the indices of elements in one Scala list which are not present in a second list (assume the second list has distinct elements so that you do not need to invoke toSet on it). The best way I found is:
val ls = List("a", "b", "c") // the list
val excl = List("c", "d") // the list of items to exclude
val ixs = ls.zipWithIndex
  .filterNot { p => excl.contains(p._1) }
  .map { p => p._2 } // the list of indices
However, I feel there should be a more direct method. Any hints?
Seems OK to me. It's a bit more elegant as a for-comprehension, perhaps:
for ((e,i) <- ls.zipWithIndex if !excl.contains(e)) yield i
And for efficiency, you might want to make excl into a Set anyway:
val exclSet = excl.toSet
for ((e,i) <- ls.zipWithIndex if !exclSet(e)) yield i
One idea would be this:
(ls.zipWithIndex.toMap -- excl).values
This only works, however, if you are not interested in all positions when an element occurs multiple times in the list (toMap keeps only the last index for each element). That would need a MultiMap, which Scala does not have in the standard library.
Another version would be to use a partial function and convert the second list to a set first (unless it is really small, lookup in a set will be much faster):
val set = excl.toSet
ls.zipWithIndex.collect { case (x, y) if !set(x) => y }
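With the example lists from the question, both variants yield the indices of "a" and "b":

val ls   = List("a", "b", "c")
val excl = List("c", "d")
val set  = excl.toSet
val ixs  = ls.zipWithIndex.collect { case (x, y) if !set(x) => y }
// ixs == List(0, 1)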
Suppose we have a list of Int values (positive and negative) and the task is to double only the positive values. Here is a snippet that produces the desired result:
val list: List[Int] = ...
list.filter(_ > 0).map(_ * 2)
So far so good, but what if the list is very large, say of size N? Does the program iterate N times for the filter function and then N more times for the map function?
How does Scala know when it's time to traverse the list and apply all the filtering and mapping? What happens (in terms of list traversals) if we, for instance, group the original list by the identity function (to get rid of duplicates) and then apply a map?
Does the program iterate N times for the filter function and then N more times for the map function?
Yes. Use a view to make operations on collections lazy. For example:
val list: List[Int] = ...
list.view.filter(_ > 0).map(_ * 2)
How does Scala know when it's time to traverse the list and apply all the filtering and mapping?
When using a view, the computation only runs when you actually use a materialized value:
val x = list.view.filter(_ > 0).map(_ * 2)
println(x.head * 2)
This only applies the filter and the map when head is called.
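To get a strict collection back at the end, you can force the view explicitly (a small addition to the example above; toList materializes the result):

val doubled = list.view.filter(_ > 0).map(_ * 2).toList // one pass, no intermediate list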
Does the program iterate N times for the filter function and then N more times for the map function?
Yes; for a List you should use withFilter instead. From the withFilter doc:
Note: the difference between `c filter p` and `c withFilter p` is that
the former creates a new collection, whereas the latter only
restricts the domain of subsequent `map`, `flatMap`, `foreach`,
and `withFilter` operations.
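Applied to the snippet from the question, this becomes a single traversal with no intermediate collection:

val doubledPositives = list.withFilter(_ > 0).map(_ * 2)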
If there is a map after the filter, you can always use collect instead, turning
list.filter(0 < _).map(2 * _)
into
list.collect { case x if x > 0 => 2 * x }
I'm trying to understand the example here, which computes the Jaccard similarity between pairs of vectors in a matrix.
val aBinary = adjacencyMatrix.binarizeAs[Double]

// intersectMat holds the size of the intersection row(a)_i ∩ row(b)_j
val intersectMat = aBinary * aBinary.transpose

val aSumVct = aBinary.sumColVectors
val bSumVct = aBinary.sumRowVectors

// Using zip to repeat the row and column vectors' values on the right hand
// side for all non-zeroes on the left hand matrix
val xMat = intersectMat.zip(aSumVct).mapValues(pair => pair._2)
val yMat = intersectMat.zip(bSumVct).mapValues(pair => pair._2)
Why does the last comment mention non-zero values? As far as I'm aware, ._2 selects the second element of a pair independently of the first element. At what point are (0, x) pairs obliterated?
Yeah, I don't know anything about Scalding, but this seems odd. If you look at the zip implementation, it specifically mentions that it does an outer join to preserve zeros on either side. So the comment does not seem to reflect how zeroes are actually treated in matrix.zip.
Besides looking at the dimension returned by zip, it really seems this line just replicates the aSumVct column vector for each column:
val xMat = intersectMat.zip(aSumVct).mapValues( pair => pair._2 )
Also, I find the val bSumVct = aBinary.sumRowVectors suspicious, because it sums the matrix along the wrong dimension. It feels like something like this would be better:
val bSumVct = aBinary.transpose.sumRowVectors
Which would conceptually be the same as aSumVct.transpose, so that, at the end of the day, in cell (i, j) of xMat + yMat we find the sum of the elements of row(i) plus the sum of the elements of row(j); we then subtract intersectMat to adjust for the double counting.
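For reference, this matches the usual set formulation of Jaccard similarity, |A ∩ B| / (|A| + |B| - |A ∩ B|). A plain-Scala sketch for two binary vectors (not Scalding code, just to illustrate the double-counting adjustment):

// Jaccard similarity of two equal-length 0/1 vectors
def jaccard(a: Seq[Int], b: Seq[Int]): Double = {
  val intersection = a.zip(b).count { case (x, y) => x == 1 && y == 1 }
  val union = a.sum + b.sum - intersection // subtract to undo the double count
  if (union == 0) 0.0 else intersection.toDouble / union
}

jaccard(Seq(1, 1, 0, 1), Seq(0, 1, 0, 1)) // == 2.0 / 3.0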
Edit: a little bit of googling unearthed this blog post: http://www.flavianv.me/post-15.htm. It seems the comments were related to that version, where the vectors to compare are in two separate matrices that don't necessarily have the same size.