With Futures, if I have a list of futures I can convert them into a single future with Future.sequence, but cats.effect.IO has no IO.sequence method.
So if I have a List[IO[Long]], how do I convert it into IO[List[Long]]?
Is something like this what you are looking for?
import cats.effect.IO
import cats.instances.list._
import cats.syntax.parallel._

val listIo: List[IO[Long]] = ???
val inParallel: IO[List[Long]] = listIo.parSequence
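parSequence runs the effects in parallel. If sequential execution is fine, the same conversion is also available through cats' Traverse syntax (a minimal sketch; the exact imports can vary by cats version):

import cats.effect.IO
import cats.instances.list._
import cats.syntax.traverse._

val listIo: List[IO[Long]] = ???
val sequential: IO[List[Long]] = listIo.sequence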
I have the following list from my configuration:
val markets = Configuration.getStringList("markets");
To create a sequence out of it I write this code:
JavaConverters.asScalaIteratorConverter(markets.iterator()).asScala.toSeq
I wish I could do it in a less verbose way, such as:
markets.toSeq
And get the sequence directly from that call. I will have more configuration in the near future; is there a solution that provides this kind of simplicity?
I want a sequence regardless of the configuration library I am using. I don't want to have the stated verbose solution with the JavaConverters.
JavaConversions is deprecated since Scala 2.12.0. Use JavaConverters; you can import scala.collection.JavaConverters._ to make it less verbose:
import scala.collection.JavaConverters._
val javaList = java.util.Arrays.asList("one", "two")
val scalaSeq = javaList.asScala.toSeq
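Applied to the configuration call from the question (assuming Configuration.getStringList returns a java.util.List[String]), that becomes:

import scala.collection.JavaConverters._

val markets: Seq[String] = Configuration.getStringList("markets").asScala.toSeq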
Yes. Just import the implicit conversions:

import java.util
import scala.collection.JavaConversions._

val jlist = new util.ArrayList[String]()
jlist.toSeq

Keep in mind, though, that JavaConversions is deprecated since Scala 2.12, so the JavaConverters approach above is preferred.
I am trying to execute something like this:
scala> import scala.sys.process._
scala> Process("cat temp.txt")!
I will be doing this say in a Play Framework REST handler. I want this to return a future object so that I can map/flatMap on it and do further processing when the shell is done executing. How do I do that?
I think all you need is this:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.sys.process._
val fs = Future("cat temp.txt".!!) // Future[String] = Future(<not completed>)
The file contents become one long string, but you can split() them in a map() operation.
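For example, to turn the output into a list of lines for further processing (a small sketch continuing from the snippet above):

val lines: Future[List[String]] = fs.map(_.split("\n").toList)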
I am fairly new to spray, and I would like to extract the result returned by this API to a list variable.
What would be the best way to achieve this?
If we ignore error handling, you could do it like so:
import scala.io.Source
import spray.json._
import DefaultJsonProtocol._
...
val source = Source.fromURL("https://api.guildwars2.com/v2/items")
val json = source.mkString.parseJson
val list = json.convertTo[List[Int]]
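If you do want error handling, one option is to wrap the call in scala.util.Try and make sure the source gets closed (a sketch, not production-hardened):

import scala.io.Source
import scala.util.Try
import spray.json._
import DefaultJsonProtocol._

val result: Try[List[Int]] = Try {
  val source = Source.fromURL("https://api.guildwars2.com/v2/items")
  // parse the response body as JSON, then close the source either way
  try source.mkString.parseJson.convertTo[List[Int]]
  finally source.close()
}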
scalaz.syntax.IdOps seems to have no "companion" object from which to import its implicits (see the selfless-trait pattern), so it's hard to use in the REPL, for example:
scala> val selfish = new scalaz.syntax.ToIdOps{} // I don't want to do this, it feels wrong
selfish: scalaz.syntax.ToIdOps = $anon$1@1adfe356
scala> import selfish._
import selfish._
Is there a way to import it?
https://github.com/scalaz/scalaz/blob/v7.1.2/core/src/main/scala/scalaz/syntax/Syntax.scala#L117
You can use scalaz.syntax.id instead of new scalaz.syntax.ToIdOps{}:
import scalaz.syntax.id._
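With that import in scope, the IdOps methods, for example the thrush operator |> (which applies the function on its right to the value on its left), become available on any value:

scala> import scalaz.syntax.id._
import scalaz.syntax.id._

scala> 5 |> (_ + 3)
res0: Int = 8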
Here is the code I'm trying out for reduceByKey:
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import org.apache.spark.SparkContext
import scala.math.random
import org.apache.spark._
import org.apache.spark.storage.StorageLevel
object MapReduce {

  def main(args: Array[String]) {
    val sc = new SparkContext("local[4]", "")

    val file = sc.textFile("c:/data-files/myfile.txt")

    val counts = file.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
  }
}
This gives the compiler error "cannot resolve symbol reduceByKey".
Yet when I hover over reduceByKey it shows three possible implementations, so it appears it is being found?
You need to add the following import to your file:
import org.apache.spark.SparkContext._
Spark documentation:
"In Scala, these operations are automatically available on RDDs containing Tuple2 objects (the built-in
tuples in the language, created by simply writing (a, b)), as long as you import org.apache.spark.SparkContext._ in your program to enable Spark’s implicit conversions. The key-value pair operations are available in the PairRDDFunctions class, which automatically wraps around an RDD of tuples if you import the conversions."
It seems as if the documented behavior has changed in Spark 1.4.x. To have IntelliJ recognize the implicit conversions you now have to add the following import:
import org.apache.spark.rdd.RDD._
I have noticed that at times IntelliJ is unable to resolve methods that are imported implicitly via PairRDDFunctions (https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala).
The implicitly imported methods include the reduceByKey* and reduceByKeyAndWindow* methods. I do not have a general solution at this time, except to say that yes, you can safely ignore the IntelliSense errors.
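One workaround that can help the IDE (a sketch; it relies on the implicits having moved to the RDD companion object in Spark 1.3+) is to apply the conversion explicitly instead of relying on the import:

import org.apache.spark.rdd.RDD

// wrap the pair RDD in PairRDDFunctions explicitly so the IDE can resolve reduceByKey
val pairs = file.flatMap(line => line.split(" ")).map(word => (word, 1))
val counts = RDD.rddToPairRDDFunctions(pairs).reduceByKey(_ + _)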