Geolocation in Scala.js

I have the following function in my Scala.js program:
def geo(): Unit = {
  val window = document.defaultView
  val nav = window.navigator
  val geo: Geolocation = nav.geolocation
  def onSuccess(p: Position) = {
    println(s"latitude=${p.coords.latitude}")   // Latitude
    println(s"longitude=${p.coords.longitude}") // Longitude
  }
  def onError(p: PositionError) = println("Error")
  geo.watchPosition(onSuccess _, onError _)
}
I am calling this function from my main function, but it keeps printing the latitude and longitude repeatedly at regular intervals. I want it to print only once. I cannot see what I am doing wrong here; what should I do to stop it from printing again and again?

watchPosition keeps reporting position updates by design. You can use clearWatch to stop watching as soon as you have observed one position, like this:
def geo(): Unit = {
  val window = document.defaultView
  val nav = window.navigator
  val geo: Geolocation = nav.geolocation
  var watchID: Int = 0
  def onSuccess(p: Position) = {
    println(s"latitude=${p.coords.latitude}")   // Latitude
    println(s"longitude=${p.coords.longitude}") // Longitude
    geo.clearWatch(watchID) // only observe one position
  }
  def onError(p: PositionError) = println("Error")
  watchID = geo.watchPosition(onSuccess _, onError _)
}
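Alternatively, if you only ever need a single reading, getCurrentPosition avoids the watch/clear bookkeeping entirely. A minimal sketch, assuming the same scalajs-dom imports as in your snippet:
def geoOnce(): Unit = {
  val geo = document.defaultView.navigator.geolocation
  def onSuccess(p: Position): Unit =
    println(s"latitude=${p.coords.latitude}, longitude=${p.coords.longitude}")
  def onError(e: PositionError): Unit = println("Error")
  // The success (or error) callback fires at most once, so there is nothing to cancel
  geo.getCurrentPosition(onSuccess _, onError _)
}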

Related

Akka stream retry repeated result

I'm implementing an iterator over an HTTP resource from which I can retrieve a paged list of elements. I tried to do this with a plain Iterator, but it's a blocking implementation, and since I'm using Akka it makes my dispatcher go a little crazy.
What I want is to implement the same iterator using akka-stream. The problem is that I need a slightly different retry strategy.
The service returns a list of elements identified by an id, and sometimes when I query for the next page, the service returns the same elements as the current page.
My current algorithm is this:
var seenIds = Set.empty[String] // use your Element#id type here
var position = 0

def isProblematicPage(elements: Seq[Element]): Boolean = {
  val currentIds = elements.map(_.id).toSet
  val intersection = seenIds & currentIds
  val hasOnlyNewIds = intersection.isEmpty
  if (hasOnlyNewIds) {
    seenIds = seenIds | currentIds
  }
  !hasOnlyNewIds
}

def incrementPage(): Unit = {
  position += 10
}

def doBackOff(attempt: Int): Unit = {
  // Backoff logic
}

@tailrec
def fetchPage(attempt: Int = 0): Iterator[Element] = {
  if (attempt > MaxRetries) {
    incrementPage()
    Iterator.empty
  } else {
    val eventualPage = service.retrievePage(position, position + 10)
    val page = Await.result(eventualPage, 5.minutes)
    if (isProblematicPage(page)) {
      doBackOff(attempt)
      fetchPage(attempt + 1)
    } else {
      incrementPage()
      page.iterator
    }
  }
}
I'm doing the implementation using akka-streams but I can't figure out how to accumulate the pages and test for repetition using the streams structure.
Any suggestions?
The Flow.scan method is useful in such situations.
I would start your stream with a source of positions:
type Position = Int

// 0, 10, 20, ...
def positionIterator(): Iterator[Position] = Iterator.from(0, 10)

val positionSource: Source[Position, _] = Source.fromIterator(() => positionIterator())
This position Source can then be directed to a Flow.scan stage which uses a function similar to your fetchPage (side note: you should avoid Await as much as possible; there is a way to eliminate it entirely, but that is outside the scope of your original question). The new function needs to take in the "state" of already-seen Elements:
def fetchPageWithState(service: Service)
                      (seenEls: Set[Element], position: Position): Set[Element] = {
  val maxRetries = 10
  val seenIds = seenEls.map(_.id)

  @tailrec
  def readPosition(attempt: Int): Seq[Element] = {
    if (attempt > maxRetries)
      Seq.empty
    else {
      val page: Seq[Element] =
        Await.result(service.retrievePage(position, position + 10), 5.minutes)
      if (page.map(_.id).exists(seenIds.contains)) {
        doBackOff(attempt)
        readPosition(attempt + 1)
      } else
        page
    }
  } // end def readPosition

  seenEls ++ readPosition(0).toSet
} // end def fetchPageWithState
This can now be used within a Flow:
def fetchFlow(service : Service) : Flow[Position, Set[Element],_] =
Flow[Position].scan(Set.empty[Element])(fetchPageWithState(service))
The new Flow can be easily connected to your Position Source to create a Source of Set[Element]:
def elementsSource(service : Service) : Source[Set[Element], _] =
positionSource via fetchFlow(service)
Each new value from elementsSource will be an ever growing Set of unique Elements from fetched pages.
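To actually run the stream you would attach a Sink. A minimal sketch, assuming an implicit ActorSystem and ActorMaterializer are in scope (the take(10) is only there to bound the example):
// Prints the growing Set[Element] after each page is folded in
elementsSource(service)
  .take(10)
  .runWith(Sink.foreach(els => println(els)))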
The Flow.scan stage was good advice, but it lacked support for futures, so I implemented an asynchronous version, Flow.scanAsync; it is now available in Akka 2.4.12.
The current implementation is:
val service: WebService
val maxAttempts: Int
val backOff: FiniteDuration

def retry[T](zero: T, attempt: Int = 0)(f: => Future[T]): Future[T] = {
  f.recoverWith {
    case ex if attempt >= maxAttempts =>
      Future.successful(zero)
    case ex =>
      akka.pattern.after(backOff, system.scheduler)(retry(zero, attempt + 1)(f))
  }
}

def isProblematicPage(lastPage: Seq[Element], currPage: Seq[Element]): Boolean = {
  val lastPageIds = lastPage.map(_.id).toSet
  val currPageIds = currPage.map(_.id).toSet
  val intersection = lastPageIds & currPageIds
  intersection.nonEmpty
}

def retrievePage(lastPage: Seq[Element], startIndex: Int): Future[Seq[Element]] = {
  retry(Seq.empty[Element]) {
    service.fetchPage(startIndex).map { currPage: Seq[Element] =>
      if (isProblematicPage(lastPage, currPage)) throw new ProblematicPageException(startIndex)
      else currPage
    }
  }
}

val pagesRange: Range = Range(0, maxItems, pageSize)
val scanAsyncFlow = Flow[Int].via(ScanAsync(Seq.empty)(retrievePage))

Source(pagesRange)
  .via(scanAsyncFlow)
  .mapConcat(identity)
  .runWith(Sink.seq)
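Side note: with the combinator available as a method on Flow (Akka 2.4.12+), the .via(ScanAsync(...)) line above should be expressible directly as scanAsync, reusing the same retrievePage; a sketch:
// Equivalent formulation using the built-in combinator
val scanAsyncFlow = Flow[Int].scanAsync(Seq.empty[Element])(retrievePage)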
Thanks Ramon for the advice :)

scala "not a member of type parameter"

I'm trying to use Spark GraphX, and encountering what I think is a problem in how I'm using Scala. I'm a newbie to both Scala and Spark.
I create a graph by invoking my own function:
val initialGraph: Graph[VertexAttributes, Int] = sim.createGraph
VertexAttributes is a class I defined:
class VertexAttributes(var pages: List[Page], var ads: List[Ad], var step: Long, val inDegree: Int, val outDegree: Int)
  extends java.io.Serializable
{
  // Define alternative methods to be used as the score
  def averageScore() =
  {
    this.ads.map(_.score).sum / this.ads.length
  }
  def maxScore() =
  {
    if (this.ads.length == 0) None else Some(this.ads.map(_.score).max)
  }
  // Select averageScore as the function to be used
  val score = averageScore _
}
After some computations, I use the GraphX vertices() function to get the scores for each vertex:
val nodeRdd = g.vertices.map(v => if(v._2.score() == 0)(v._1 + ",'0,0,255'") else (v._1 + ",'255,0,0'"))
But this won't compile; the sbt error message is:
value score is not a member of type parameter VertexAttributes
I have googled this error message, but frankly can't follow the conversation. Can anyone please explain the cause of the error and how I can fix it?
Thank you.
P.S. Below is my code for the createGraph method:
// Define a class to run the simulation
class Butterflies() extends java.io.Serializable
{
  // A boolean flag to enable debug statements
  var debug = true
  // A boolean flag to read an edgelist file rather than compute the edges
  val readEdgelistFile = true

  // Create a graph from a page file and an ad file
  def createGraph(): Graph[VertexAttributes, Int] =
  {
    // Just needed for textFile() method to load an RDD from a textfile
    // Cannot use the global Spark context because SparkContext cannot be serialized from master to worker
    val sc = new SparkContext

    // Parse a text file with the vertex information
    val pages = sc.textFile("hdfs://ip-172-31-4-59:9000/user/butterflies/data/1K_nodes.txt")
      .map { l =>
        val tokens = l.split("\\s+") // split("\\s") will split on whitespace
        val id = tokens(0).trim.toLong
        val tokenList = tokens.last.split('|').toList
        (id, tokenList)
      }
    println("********** NUMBER OF PAGES: " + pages.count + " **********")

    // Parse a text file with the ad information
    val ads = sc.textFile("hdfs://ip-172-31-4-59:9000/user/butterflies/data/1K_ads.txt")
      .map { l =>
        val tokens = l.split("\\s+") // split("\\s") will split on whitespace
        val id = tokens(0).trim.toLong
        val tokenList = tokens.last.split('|').toList
        val next: VertexId = 0
        val score = 0
        //val vertexId: VertexId = id % 1000
        val vertexId: VertexId = id
        (vertexId, Ad(id, tokenList, next, score))
      }
    println("********** NUMBER OF ADS: " + ads.count + " **********")

    // Check if we should simply read an edgelist file, or compute the edges from scratch
    val edgeGraph =
      if (readEdgelistFile)
      {
        // Create a graph from an edgelist file
        GraphLoader.edgeListFile(sc, "hdfs://ip-172-31-4-59:9000/user/butterflies/data/1K_edges.txt")
      }
      else
      {
        // Create the edges between similar pages:
        // create a list of all possible pairs of pages,
        // then check if any pair shares at least one token.
        // We only need the pair id's for the edgelist.
        val allPairs = pages.cartesian(pages).filter{ case (a, b) => a._1 < b._1 }
        val similarPairs = allPairs.filter{ case (page1, page2) => page1._2.intersect(page2._2).length >= 1 }
        val idOnly = similarPairs.map{ case (page1, page2) => Edge(page1._1, page2._1, 1) }
        println("********** NUMBER OF EDGES: " + idOnly.count + " **********")
        // Save the list of edges as a file, to be used instead of recomputing the edges every time
        //idOnly.saveAsTextFile("hdfs://ip-172-31-4-59:9000/user/butterflies/data/saved_edges")
        // Create a graph from an edge list RDD
        Graph.fromEdges[Int, Int](idOnly, 1)
      }

    // Copy into a graph with nodes that have vertexAttributes
    //val attributeGraph: Graph[VertexAttributes, Int] =
    val attributeGraph =
      edgeGraph.mapVertices{ (id, v) => new VertexAttributes(Nil, Nil, 0, 0, 0) }

    // Add the node information into the graph
    val nodeGraph = attributeGraph.outerJoinVertices(pages) {
      (vertexId, attr, pageTokenList) =>
        new VertexAttributes(List(Page(vertexId, pageTokenList.getOrElse(List.empty), 0)),
          attr.ads, attr.step, attr.inDegree, attr.outDegree)
    }

    // Add the node degree information into the graph
    val degreeGraph = nodeGraph
      .outerJoinVertices(nodeGraph.inDegrees)
      {
        case (id, attr, inDegree) =>
          new VertexAttributes(attr.pages, attr.ads, attr.step, inDegree.getOrElse(0), attr.outDegree)
      }
      .outerJoinVertices(nodeGraph.outDegrees)
      {
        case (id, attr, outDegree) =>
          new VertexAttributes(attr.pages, attr.ads, attr.step, attr.inDegree, outDegree.getOrElse(0))
      }

    // Add the ads to the nodes
    val adGraph = degreeGraph.outerJoinVertices(ads)
    {
      (vertexId, attr, ad) =>
      {
        if (ad.isEmpty)
        {
          new VertexAttributes(attr.pages, List.empty, attr.step, attr.inDegree, attr.outDegree)
        }
        else
        {
          new VertexAttributes(attr.pages, List(Ad(ad.get.id, ad.get.tokens, ad.get.next, ad.get.score)),
            attr.step, attr.inDegree, attr.outDegree)
        }
      }
    }

    // Display the graph for debug only
    if (debug)
    {
      println("********** GRAPH **********")
      //printVertices(adGraph)
    }

    // return the generated graph
    adGraph
  }
}
VertexAttributes in your code refers to a type parameter, not to the VertexAttributes class. The mistake is probably in your createGraph function. For example, it may look like this:
class Sim {
  def createGraph[VertexAttributes]: Graph[VertexAttributes, Int]
}
or:
class Sim[VertexAttributes] {
  def createGraph: Graph[VertexAttributes, Int]
}
In both cases you have a type parameter called VertexAttributes. This is the same as if you wrote:
class Sim[T] {
  def createGraph: Graph[T, Int]
}
The compiler doesn't know that T has a score method (because it doesn't). You don't need that type parameter. Just write:
class Sim {
  def createGraph: Graph[VertexAttributes, Int]
}
Now VertexAttributes will refer to the class, not to the local type parameter.
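For what it's worth, here is a tiny self-contained sketch of the same shadowing effect and its fix (the names Attr, BadSim and GoodSim are illustrative, not from the original code):
class Attr { def score: Double = 1.0 }

class BadSim {
  // "Attr" below is a type parameter that shadows the Attr class above, so the
  // compiler knows nothing about score; calling a.score here would fail with
  // "value score is not a member of type parameter Attr".
  def describe[Attr](a: Attr): String = a.toString
}

class GoodSim {
  // No type parameter: Attr refers to the class, so score is visible.
  def describe(a: Attr): Double = a.score
}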

Why does Future.firstCompletedOf not invoke callback on timeout?

I am doing the exercises from Learning Concurrent Programming in Scala.
The exercise statement is included in the code comment below.
The program prints the HTML contents correctly for a valid URL when the timeout is long enough.
It prints "Error occured" for a valid URL when the timeout is too low.
However, for an invalid URL, "Error occured" is not printed. What is the problem with the code below?
/*
 * Implement a command-line program that asks the user to input a URL of some website,
 * and displays the HTML of that website. Between the time that the user hits ENTER and
 * the time that the HTML is retrieved, the program should repetitively print a . to the
 * standard output every 50 milliseconds, with a two seconds timeout. Use only futures
 * and promises, and avoid the synchronization primitives from the previous chapters.
 * You may reuse the timeout method defined in this chapter.
 */
object Excersices extends App {
  val timer = new Timer()

  def timeout(t: Long = 1000): Future[Unit] = {
    val p = Promise[Unit]
    val timer = new Timer(true)
    timer.schedule(new TimerTask() {
      override def run() = {
        p.success(())
        timer.cancel()
      }
    }, t)
    p.future
  }

  def printDot = println(".")
  val taskOfPrintingDot = new TimerTask {
    override def run() = printDot
  }

  println("Enter a URL")
  val lines = io.Source.stdin.getLines()
  val url = if (lines.hasNext) Some(lines.next()) else None

  timer.schedule(taskOfPrintingDot, 0L, 50.millisecond.toMillis)
  val timeOut2Sec = timeout(2.second.toMillis)

  val htmlContents = Future {
    url map { x =>
      blocking {
        Source.fromURL(x).mkString
      }
    }
  }

  Future.firstCompletedOf(Seq(timeOut2Sec, htmlContents)) map { x =>
    timer.cancel()
    x match {
      case Some(x) =>
        println(x)
      case _ =>
        println("Error occured")
    }
  }

  Thread.sleep(5000)
}
As @Gábor Bakos said, an exception produces a Failure, which is not handled by map:
val fut = Future { Some(Source fromURL ("hhhttp://google.com")) }
scala> fut map { x => println(x) } //nothing printed
res12: scala.concurrent.Future[Unit] = scala.concurrent.impl.Promise$DefaultPromise#5e025724
To process the failure, use the recover method:
scala> fut recover { case failure => None } map { x => println(x) }
None
res13: scala.concurrent.Future[Unit] = scala.concurrent.impl.Promise$DefaultPromise#578afc83
In your context it's something like:
Future.firstCompletedOf(Seq(timeOut2Sec, htmlContents)) recover {case x => println("Error:" + x); None} map { x => ...}
The complete code, after using recover as advised by @dk14:
object Exercises extends App {
  val timer = new Timer()

  def timeout(t: Long = 1000): Future[Unit] = {
    val p = Promise[Unit]
    val timer = new Timer(true)
    timer.schedule(new TimerTask() {
      override def run() = {
        p.success(())
        timer.cancel()
      }
    }, t)
    p.future
  }

  def printDot = println(".")
  val taskOfPrintingDot = new TimerTask {
    override def run() = {
      printDot
    }
  }

  println("Enter a URL")
  val lines = io.Source.stdin.getLines()
  val url = if (lines.hasNext) Some(lines.next()) else None

  timer.schedule(taskOfPrintingDot, 0L, 50.millisecond.toMillis)
  val timeOut2Sec = timeout(2.second.toMillis)

  val htmlContents = Future {
    url map { x =>
      blocking {
        Source.fromURL(x).mkString
      }
    }
  }

  Future.firstCompletedOf(Seq(timeOut2Sec, htmlContents))
    .recover { case x => println("Error:" + x); None }
    .map { x =>
      timer.cancel()
      x match {
        case Some(x) =>
          println(x)
        case _ =>
          println("Timeout occurred")
      }
    }

  Thread.sleep(5000)
}

Starting another Future call from within a Future.traverse

I am having trouble calling out to another function that returns a Future from within a Future.traverse. The scenario is that I have a Seq[Document] which I need to turn into a Future[Seq[NewsArticle]]. However, in order to do this I need to take the doc.categoryID and use it to call out to another API to build the NewsCategory, which is returned as a Future[NewsCategory]. My problem is not knowing how to fit this into the Future.traverse after it has returned.
Here is my scala code so far:
object NewsArticle {
  def buildNewsArticles(docs: Seq[Document]): Future[Seq[NewsArticle]] = {
    Future.traverse(docs) { doc =>
      val categoryID = doc.catID
      val title = doc.title
      val pdf = doc.pdfLink
      val image = doc.imageUrl
      val body = doc.body
      // need to call NewsCategory.buildNewsCategory(categoryID)
      // but it returns a Future[NewsCategory]
      // can't use for-yield because it still only yields
      // a Future
      future(NewsArticle( ??? , title, pdf, image, body))
    }
  }
}
object NewsCategory {
  def buildNewsCategory(catID: String): Future[NewsCategory] = {
    // Calls an external API across the wire and
    // returns a Future[NewsCategory]
  }
}

// CASE CLASSES
case class NewsArticle(
  category: NewsCategory,
  title: String,
  pdf: String,
  image: String,
  body: String)

case class NewsCategory(
  id: String,
  name: String,
  description: String)
Thanks for helping
It sounds like you want to create a NewsArticle instance based on the NewsCategory returned in the future from buildNewsCategory, which means that you were on the right track. I think the following will do what you want:
Future.traverse(docs) { doc =>
  val categoryID = doc.catID
  val title = doc.title
  val pdf = doc.pdfLink
  val image = doc.imageUrl
  val body = doc.body
  // This could be a for-yield instead as you note.
  NewsCategory.buildNewsCategory(categoryID).map { category =>
    NewsArticle(category, title, pdf, image, body)
  }
}
Maybe try this:
object NewsArticle {
  def buildNewsArticles(docs: Seq[Document]): Future[Seq[NewsArticle]] = {
    Future.traverse(docs) { doc =>
      val categoryID = doc.catID
      val title = doc.title
      val pdf = doc.pdfLink
      val image = doc.imageUrl
      val body = doc.body
      for {
        cat <- NewsCategory.buildNewsCategory(categoryID)
      } yield new NewsArticle(cat, title, pdf, image, body)
    }
  }
}
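Either version yields a Future[Seq[NewsArticle]] that completes once every per-document category lookup has finished. A usage sketch (the docs value and the global execution context import are assumptions for illustration):
import scala.concurrent.ExecutionContext.Implicits.global

val docs: Seq[Document] = ??? // obtained elsewhere
NewsArticle.buildNewsArticles(docs).foreach { articles =>
  articles.foreach(a => println(s"${a.category.name}: ${a.title}"))
}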

How to fetch a set of pages as a Stream?

How can I consume a service that returns pages as a Stream of items?
Amazon S3, for example, lets you fetch the initial object listing and then the next object listing from the previous one.
For example, consider this code that simulates such behavior:
import math._

case class Page(number: Int)
case class Pages(pages: Seq[Page], truncated: Boolean)

class PagesService(pageSize: Int, pagesServed: Int) {

  def getPages =
    Pages((1 to pageSize).map(Page), pageSize < pagesServed)

  def nextPages(previous: Pages) = {
    val first = previous.pages.last.number + 1
    val last = min(first + pageSize, pagesServed)
    Pages((first to last).map(Page), last < pagesServed)
  }
}

object PagesClient extends App {
  val service = new PagesService(10, 100)

  val first = service.getPages
  assert(first.truncated)
  first.pages.foreach(println(_))

  val second = service.nextPages(first)
  second.pages.foreach(println(_))

  val book: Stream[Page] = ???
}
How could I write that last expression?
val book: Stream[Pages] = first #:: book.map(service.nextPages).takeWhile(_.pages.nonEmpty)
val pages: Stream[Page] = book.flatten(_.pages)
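For example, consuming the lazily built stream triggers only as many service calls as are actually needed; a sketch reusing the PagesService simulation from the question (flatMap instead of flatten, which is equivalent here):
object PagesClient extends App {
  val service = new PagesService(10, 100)
  val first = service.getPages

  val book: Stream[Pages] =
    first #:: book.map(service.nextPages).takeWhile(_.pages.nonEmpty)
  val pages: Stream[Page] = book.flatMap(_.pages)

  // Forcing 25 elements fetches only the first few Pages from the service
  pages.take(25).foreach(p => println(p))
}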
I do not know if it is a typo. If you mean Stream[Pages], it is simple:
val book: Stream[Pages] = first #:: book.map(x => service.nextPages(x))
If you meant Stream[Page], i.e. a stream of Page objects from all pages, then:
val first = service.getPages
val second = service.nextPages(first)

// book is the Stream[Pages] defined above
val books: Stream[Page] = {
  val currentPages = book.iterator
  val firstPages = currentPages.next.pages.iterator

  def inner(current: Iterator[Page]): Stream[Page] = {
    if (current.hasNext) {
      current.next #:: inner(current)
    } else {
      val i = currentPages.next
      inner(i.pages.iterator)
    }
  }

  inner(firstPages)
}
The above basically takes a Pages and returns its Page objects as part of the stream; when that Pages is exhausted, it moves on to the next Pages, and so on.