Get the first elements (take function) of a DStream - scala

I'm looking for a way to retrieve the first elements of a DStream created as:
val dstream = ssc.textFileStream(args(1)).map(x => x.split(",").map(_.toDouble))
Unfortunately, there is no take function on a DStream (as there is on an RDD), so dstream.take(2) is not possible.
Does anyone have an idea how to do it? Thanks!

You can use the transform method on the DStream: take n elements of the underlying RDD and save them to a list, then filter the original RDD down to the elements contained in that list. This returns a new DStream containing n elements:
val n = 10
val partOfResult = dstream.transform(rdd => {
  val list = rdd.take(n)
  rdd.filter(list.contains)
})
partOfResult.print()

The previously suggested solution did not work for me: the take() method returns an Array, which is not serializable, so Spark Streaming fails with a java.io.NotSerializableException.
A simple variation on the previous code that worked for me:
val n = 10
val partOfResult = dstream.transform(rdd => {
  rdd.filter(rdd.take(n).toList.contains)
})
partOfResult.print()

Sharing a Java-based solution that is working for me. The idea is to use a custom function that selects the top row from a sorted RDD.
someData.transform(rdd -> {
    JavaRDD<CryptoDto> result = rdd
        .keyBy(Recommendations.volumeAsKey)
        .sortByKey(new CryptoComparator())
        .values()
        .zipWithIndex()
        .map(row -> {
            CryptoDto purchaseCrypto = new CryptoDto();
            purchaseCrypto.setBuyIndicator(row._2 + 1L);
            purchaseCrypto.setName(row._1.getName());
            purchaseCrypto.setVolume(row._1.getVolume());
            purchaseCrypto.setProfit(row._1.getProfit());
            purchaseCrypto.setClose(row._1.getClose());
            return purchaseCrypto;
        })
        .filter(Recommendations.selectTopInSortedRdd);
    return result;
}).print();
The custom function selectTopInSortedRdd looks like this:
public static Function<CryptoDto, Boolean> selectTopInSortedRdd = new Function<CryptoDto, Boolean>() {
    private static final long serialVersionUID = 1L;

    @Override
    public Boolean call(CryptoDto value) throws Exception {
        if (value.getBuyIndicator() == 1L) {
            System.out.println("Value of buyIndicator: " + value.getBuyIndicator());
            return true;
        } else {
            return false;
        }
    }
};
It basically checks every incoming element and returns true only for the first record of the sorted RDD.
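For anyone following the thread in Scala, the same idea could look roughly like this inside a transform (a sketch only; volume stands in for whatever field the comparator sorts on):
// sort the RDD, index it, and keep only the first (index 0) element
val topRow = rdd
  .sortBy(dto => dto.volume)
  .zipWithIndex()
  .filter { case (_, idx) => idx == 0L }
  .map { case (dto, _) => dto }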

This always seems to be an issue with DStreams as well as regular RDDs.
If you don't want to (or can't) use .take() (especially on DStreams), you can think outside the box here and just use reduce instead. That is a valid function on both DStreams and RDDs.
Think about it. If you use reduce like this (Python example):
.reduce(lambda x, y: x)
Then what happens is: for every two elements you pass in, always return only the first. So if you have a million elements in your RDD or DStream, it will be shrunk to one element in the end, which is the very first one in your RDD or DStream.
Simple and clean.
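Since the question is about Scala DStreams, the same trick in Scala might look like this (a sketch using the dstream from the question):
// every pairwise reduction keeps its first argument,
// so each batch collapses to a single element
val firstElement = dstream.reduce((x, _) => x)
firstElement.print()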
However, keep in mind that .reduce() does not take order into consideration. You can easily overcome this with a custom function instead.
Example: let's assume your data looks like this: x = (1, [1,2,3]) and y = (2, [1,2]), i.e. a tuple whose second element is a list. If you are sorting by the longest list, for example, then your code could look like below (adapt as needed):
def your_reduce(x, y):
    if len(x[1]) > len(y[1]):
        return x
    else:
        return y

yourNewRDD = yourOldRDD.reduce(your_reduce)
Accordingly, you will get (1, [1,2,3]), as that has the longer list. There you go!
This has caused me some headaches in the past until I finally tried this. Hopefully this helps.

Related

Return keyword inside the inline function in Scala

I've heard that one should not use the return keyword in Scala, because it can change the flow of the program, as in:
// this will return only 2 because of the return keyword
List(1, 2, 3).map(value => return value * 2)
Here is my case: I have a recursive case class and use DFS to do some calculation on it, so the maximum depth could be 5. Here is the model:
case class Line(
  lines: Option[Seq[Line]],
  balls: Option[Seq[Ball]],
  op: Option[String]
)
I'm using a DFS approach to search this recursive model. But at some point, if a special value exists in the data, I want to stop iterating over the remaining data and return the result directly instead. Here is an example:
Line(
  lines = Some(Seq(
    Line(None, Some(Seq(Ball(1), Ball(3))), Some("s")),
    Line(None, Some(Seq(Ball(5), Ball(2))), Some("d")),
    Line(None, Some(Seq(Ball(9))), None)
  )),
  balls = None,
  op = None
)
Given this data, I want to return something like "NOT_OKAY" if I run into Ball(5), which means I don't need to do any operation on Ball(2) and Ball(9) anymore. Otherwise, I will apply a calculation to each Ball(x) with the given operator.
I'm using this sort of DFS method:
def calculate(line: Line) = {
  // the String is the actual result that I want; the Boolean tracks whether there is data I don't want
  def dfs(line: Line): (String, Boolean) = {
    line.balls.map { _.map { ball =>
      val result = someCalculationOnBall(ball)
      // return keyword here because I don't want to iterate over the values left in balls
      if (result == "NOTREQUIRED") return ("NOT_OKAY", true)
      ("OKAY", false)
    }}.getOrElse(
      line.lines.map { _.map { subLine =>
        val groupResult = dfs(subLine)
        // using return here because I want to return the result directly instead of iterating over the values left in lines
        if (groupResult._2) return ("NOT_OKAY", true)
        ("OKAY", false)
      }}
    )
  }
  // ... rest of the thing
}
In this case, I'm using the return keyword in inline functions, which changes the behaviour of the inner map functions completely. I've read about not using the return keyword in Scala, but couldn't determine whether it will create a problem here or not. In my case, I don't want to do any calculation once I run into a value that I don't want to see, and I couldn't find a functional way to get rid of the return keyword.
Is there any side effect, like a stack exception etc., to using the return keyword here? I'm always open to alternative approaches. Thank you so much!
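For illustration, one possible functional alternative (just a sketch, assuming someCalculationOnBall from the question) is to express the "stop early" case as a value and let exists do the short-circuiting:
def dfs(line: Line): (String, Boolean) = {
  // exists stops at the first match, so no explicit return is needed
  val bad =
    line.balls.exists(_.exists(ball => someCalculationOnBall(ball) == "NOTREQUIRED")) ||
      line.lines.exists(_.exists(subLine => dfs(subLine)._2))
  if (bad) ("NOT_OKAY", true) else ("OKAY", false)
}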

Creating Seq after waiting for all results from map/foreach in Scala

I am trying to loop over inputs and process them to produce scores.
Just for the first input, I want to do some processing that takes a while.
The function ends up returning just the values from the 'else' part; the 'if' part finishes executing only after the function has already returned.
I am new to Scala and understand the behaviour, but I'm not sure how to fix it.
I've tried inputs.zipWithIndex.map instead of foreach, but the result is the same.
def getscores(inputs: Inputs): Future[Seq[scoreInfo]] = {
  var scores: Seq[scoreInfo] = Seq()
  inputs.zipWithIndex.foreach {
    case (f, i) =>
      if (i == 0) {
        // long operation that returns Future[Option[scoreInfo]]
        getgeoscore(f).foreach(gso => {
          gso.foreach(score => {
            scores = scores :+ score
          })
        })
      } else {
        scores = scores :+ scoreInfo(id = "", score = 5)
      }
  }
  Future {
    scores
  }
}
For what you need, I would drop the mutable variable and replace foreach with map to obtain an immutable list of Futures, use recover to handle exceptions, and then combine them with Future.sequence, like below:
def getScores(inputs: Inputs): Future[List[ScoreInfo]] = Future.sequence(
  inputs.zipWithIndex.map { case (input, idx) =>
    if (idx == 0)
      getGeoScore(input).map(_.getOrElse(defaultScore)).recover { case e => errorHandling(e) }
    else
      Future.successful(ScoreInfo("", 5))
  }
)
To capture/print the result, one way is to use onComplete:
getScores(inputs).onComplete(println)
The part you're missing is a tricky element of concurrency: the order of execution when using multiple futures is not guaranteed.
If this block is long-running, it will take a while before the score is appended to scores:
// long operation that returns Future[Option[scoreInfo]]
getgeoscore(f).foreach(gso => {
  gso.foreach(score => {
    // stick a println("here") in here to see what happens, for demonstration purposes only
    scores = scores :+ score
  })
})
Since that executes concurrently, your getscores function simultaneously continues its work, iterating over the rest of the inputs in your zipWithIndex. This iteration, especially since it's trivial work, likely finishes well before the long-running getgeoscore(f) completes the Future it scheduled, and the code exits the function, moving on to whatever comes after the call to getscores:
val futureScores: Future[Seq[scoreInfo]] = getScores(inputs)
futureScores.onComplete {
  case Success(scoreInfoSeq) => println(s"Here's the scores: ${scoreInfoSeq.mkString(",")}")
}
// at this point the call to getgeoscore(f) could still be running and finish later, but you will never know
doSomeOtherWork()
Now, to clean this up: since you can run zipWithIndex on your inputs parameter, I assume it's something like inputs: Seq[Input]. If all you want to do is operate on the first input, then use the head function to retrieve just the first element, i.e. getgeoscore(inputs.head); you don't need the rest of the code you have there.
Also, as a note: if you're using Scala, get out of the habit of using mutable vars, especially when working with concurrency. Scala is built around immutability, so if you find yourself reaching for a var, try using a val and look into Scala's collection library to make it work.
In general, that is, when you have several concurrent futures, I would say Leo's answer describes the right way to do it. However, you want only the first element transformed by a long-running operation. So you can use the future returned by the respective function and append the other elements when the long-running call returns, by mapping the future's result:
def getscores(inputs: Inputs): Future[Seq[ScoreInfo]] =
  getgeoscore(inputs.head)
    .map { optInfo =>
      optInfo.toSeq ++ inputs.tail.map(_ => ScoreInfo(id = "", score = 5))
    }
So you neither need zipWithIndex, nor an additional future, nor joining the results of several futures with sequence. Mapping the future simply gives you a new future with the result transformed by the function passed to .map().
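To make that last point concrete, here is a minimal, self-contained illustration of mapping a Future (the global execution context is used purely for demonstration):
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val f: Future[Int] = Future.successful(2)
// map yields a new Future holding the transformed result
val g: Future[String] = f.map(n => s"score=$n")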

How to combine the elements of an arbitrary number of dependent Fluxes?

In the non-reactive world, the following code snippet is nothing special:
interface Enhancer {
    Result enhance(Result result);
}

Result result = Result.empty();
result = fooEnhancer.enhance(result);
result = barEnhancer.enhance(result);
result = bazEnhancer.enhance(result);
There are three different Enhancer implementations taking a Result instance, enhancing it and returning the enhanced result. Let's assume the order of the enhancer calls matters.
Now what if these methods are replaced by reactive variants returning a Flux<Result>? Because the methods depend on the result(s) of the preceding method, we cannot use combineLatest here.
A possible solution could be:
Flux.just(Result.empty())
.switchMap(result -> first(result)
.switchMap(result -> second(result)
.switchMap(result -> third(result))))
.subscribe(result -> doSomethingWith(result));
Note that the switchMap calls are nested. As we are only interested in the final result, we let switchMap switch to the next flux as soon as new events are emitted in preceding fluxes.
Now let's try to do it with a dynamic number of fluxes. Non-reactive (without fluxes), this would again be nothing special:
List<Enhancer> enhancers = <ordered list of different Enhancer impls>;

Result result = Result.empty();
for (Enhancer enhancer : enhancers) {
    result = enhancer.enhance(result);
}
But how can I generalize the above reactive example with three fluxes to deal with an arbitrary number of fluxes?
I found a solution using recursion:
@FunctionalInterface
interface FluxProvider {
    Flux<Result> get(Result result);
}

// recursive method creating the final Flux
private Flux<Result> cascadingSwitchMap(Result input, List<FluxProvider> fluxProviders, int idx) {
    if (idx < fluxProviders.size()) {
        return fluxProviders.get(idx).get(input)
            .switchMap(result -> cascadingSwitchMap(result, fluxProviders, idx + 1));
    }
    return Flux.just(input);
}
// code using the recursive method
List<FluxProvider> fluxProviders = new ArrayList<>();
fluxProviders.add(fooEnhancer::enhance);
fluxProviders.add(barEnhancer::enhance);
fluxProviders.add(bazEnhancer::enhance);
cascadingSwitchMap(Result.empty(), fluxProviders, 0)
    .subscribe(result -> doSomethingWith(result));
But maybe there is a more elegant solution using an operator or feature of Project Reactor. Does anybody know of such a feature? In fact, the requirement doesn't seem to be that unusual, does it?
switchMap feels inappropriate here. If you have a List<Enhancer> by the time the Flux pipeline is declared, why not apply logic close to what you had in the imperative style:
List<Enhancer> enhancers = <ordered list of different Enhancer impls>;

Mono<Result> resultMono = Mono.just(Result.empty());
for (Enhancer enhancer : enhancers) {
    resultMono = resultMono.map(enhancer::enhance); // previousValue -> enhancer.enhance(previousValue)
}
return resultMono;
That can even be performed later at subscription time for even more dynamic resolution of the enhancers by wrapping the whole code above in a Mono.defer(() -> {...}) block.
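As a rough sketch of that deferred variant (written here in Scala syntax against the Reactor API; passing the enhancers as a function is an assumption made to show late resolution):
import reactor.core.publisher.Mono

def enhancedResult(enhancers: () => List[Enhancer]): Mono[Result] =
  Mono.defer { () =>
    // the enhancer list is only read once someone subscribes
    enhancers().foldLeft(Mono.just(Result.empty())) { (mono, enhancer) =>
      mono.map(r => enhancer.enhance(r))
    }
  }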

How to update a value in a DataFrame and drop a row based on a given value in Scala

I need to update a value and, if the value is zero, drop that row. Here is a snapshot:
val net = sc.accumulator(0.0)
df1.foreach(x => { net += calculate(df2, x) })

def calculate(df2: DataFrame, x: Row): Double = {
  var pro: Double = 0.0
  df2.foreach(y => {
    if (xxx) { /* do some stuff and update the y.getLong(2) value */ }
    else if (yyy) { /* do some stuff and update the y.getLong(2) value */ }
    if (y.getLong(2) == 0) { /* drop this row from df2 */ }
  })
  pro
}
Any suggestions? Thanks.
You cannot change a DataFrame or RDD; they are read-only for a reason. But you can create a new one and use all the available transformations. So when you want to change, for example, the contents of a column in a DataFrame, just add a new column with the updated contents using a function like:
df.withColumn(...)
DataFrames are immutable; you cannot update a value, but rather have to create a new DataFrame each time.
Could you reframe your use case? It's not very clear what you are trying to achieve with the above snippet (in particular, the use of the accumulator).
You could instead try df2.withColumn(...) and use your UDF there.
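To sketch what that could look like here (the column names "flag" and "value" are hypothetical, since the snippet doesn't show df2's schema):
import org.apache.spark.sql.functions.{col, when}

val updated = df2
  .withColumn("value", when(col("flag") === "xxx", col("value") - 1).otherwise(col("value")))
  .filter(col("value") =!= 0) // "dropping" a row just means filtering it out of the new DataFrame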

Way to extract a list of elements from a Scala list

I have a standard list of objects which is used for some analysis. The analysis generates a list of Strings, and I need to look through the standard list of objects and retrieve objects with the same name.
case class TestObj(name: String, positions: List[Int], present: Boolean)

val stdLis: List[TestObj]
// analysis generates a list of strings
var generatedLis: List[String]
// list to save objects found in the standard list
val lisBuf = new ListBuffer[TestObj]()

// my current way
generatedLis.foreach { i =>
  val temp = stdLis.filter(p => p.name.equalsIgnoreCase(i))
  if (temp.size == 1) {
    lisBuf.append(temp(0))
  }
}
Is there any other way to achieve this? For example, a custom indexOf method that looks at the name instead of the whole object, or something similar. I have not tried that approach, as I am not sure about it.
stdLis.filter(testObj => generatedLis.exists(_.equalsIgnoreCase(testObj.name)))
use filter to select elements from 'stdLis' per the predicate
use exists to check if 'generatedLis' has a value of ...
Don't use mutable containers to filter sequences.
Naive solution:
val lisBuf =
  for {
    str <- generatedLis
    temp = stdLis.filter(_.name.equalsIgnoreCase(str))
    if temp.size == 1
  } yield temp(0)
If we discard the condition temp.size == 1 (I'm not sure whether that is legitimate or not):
val lisBuf = stdLis.filter(s => generatedLis.exists(_.equalsIgnoreCase(s.name)))
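A quick usage sketch of that one-liner with made-up data:
val stdLis = List(TestObj("Foo", List(1, 2), present = true), TestObj("Bar", Nil, present = false))
val generatedLis = List("foo")
val found = stdLis.filter(s => generatedLis.exists(_.equalsIgnoreCase(s.name)))
// found == List(TestObj("Foo", List(1, 2), true))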