Using xstream, how can I create a stream that only emits when its input stream emits a new value?
Here is a diagram:
input -----1--1-1--2-3--3--3---5-|
output -----1-------2-3---------5-|
While the core xstream library comprises a few well-chosen operators, additional operators are included as extras and can be accessed by their path.
import xs from 'xstream';
import dropRepeats from 'xstream/extra/dropRepeats'
const stream = xs.of(1, 1, 1, 2, 3, 3, 3, 5)
.compose(dropRepeats())
stream.addListener({
  next: i => console.log(i),
  error: err => console.error(err),
  complete: () => console.log('completed')
});
The .compose method is used to apply the extra operator to the stream.
source:
https://github.com/staltz/xstream/blob/master/EXTRA_DOCS.md#dropRepeats
Related
I have a streaming Spark app in which I remove duplicate rows from the running stream using stateful aggregation with flatMapGroupsWithState.
But when I use foreachBatch on the stream and call the same deduplication functions, it treats each batch as an independent entity and removes duplicates only within that single micro-batch.
Code:
case class User(name: String, userId: Integer)
case class StateClass(totalUsers: Int)
def removeDuplicates(inputData: Dataset[User]): Dataset[User] = {
  inputData
    .groupByKey(_.userId)
    .flatMapGroupsWithState(OutputMode.Append, GroupStateTimeout.ProcessingTimeTimeout)(removeDuplicatesInternal)
}

def removeDuplicatesInternal(id: Integer, newData: Iterator[User], state: GroupState[StateClass]): Iterator[User] = {
  if (state.hasTimedOut) {
    state.remove() // Removing state since no same UserId in 4 hours
    return Iterator()
  }
  if (newData.isEmpty)
    return Iterator()

  if (!state.exists) {
    val firstUserData = newData.next()
    val newState = StateClass(1) // Total count = 1 initially
    state.update(newState)
    state.setTimeoutDuration("4 hours")
    Iterator(firstUserData) // Returning UserData first time
  } else {
    val newState = StateClass(state.get.totalUsers + 1)
    state.update(newState)
    state.setTimeoutDuration("4 hours")
    Iterator() // Returning empty since state already exists (Already sent this UserData before)
  }
}
The input stream I use is userStream.
The functions above work fine when I pass the stream to them directly.
val results = removeDuplicates(userStream)
But when I do something like:
userStream
  .writeStream
  .foreachBatch { (batch, batchId) => writeBatch(batch) }

def writeBatch(batch: Dataset[User]): Unit = {
  val distinctBatch = removeDuplicates(batch)
}
I get distinct user data only within that micro-batch, but I want it to be distinct overall, across the 4-hour timeout.
For example:
If the 1st batch has UserIds (1, 3, 5, 1) and the 2nd batch has UserIds (2, 3, 1).
Expected Behaviour:
Output: 1st Batch = (1, 3, 5) and 2nd Batch = (2)
My Output: 1st Batch = (1, 3, 5) and 2nd Batch = (2, 3, 1)
How can I enable it to use the same state throughout? Right now it treats each micro-batch differently and creates a separate state for each batch.
PS: The problem is not limited to deduplicating the stream; I need to use foreachBatch for some computations on micro-batches, and remove duplicates before writing.
For a running test script, refer to this: https://ideone.com/nZ5pq2
The behavior is actually the expected one.
flatMapGroupsWithState leverages the state store only when the query is a streaming one. (For a batch query, it doesn't even create a state store, because it isn't necessary.) Once you call foreachBatch, the provided batch parameter is no longer continuous across batches - consider it a dataset from a batch query, where the batch means "a" micro-batch.
So you still need to pass your streaming dataset to removeDuplicates, or have your own way to deduplicate records across batches inside foreachBatch.
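As a rough sketch of the first option (reusing userStream, removeDuplicates and writeBatch from the question; the checkpoint path and exact writer options are assumptions, not part of the original answer), the stateful deduplication can be applied to the streaming dataset before foreachBatch, so the state store is kept across micro-batches while the per-batch work still happens inside writeBatch:
// Deduplicate on the streaming dataset itself, so flatMapGroupsWithState keeps
// its state store across micro-batches (including the 4-hour timeout), then hand
// the already-deduplicated micro-batch to foreachBatch.
val deduplicated: Dataset[User] = removeDuplicates(userStream)

deduplicated
  .writeStream
  .foreachBatch { (batch: Dataset[User], batchId: Long) =>
    // batch now only contains users not emitted in the last 4 hours
    writeBatch(batch)
  }
  .option("checkpointLocation", "/tmp/dedup-checkpoint") // assumption: any durable path
  .start()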
I have a huge text file and I have to extract only the named entities from this file. I am using Scala and a Databricks cluster for this.
val input = sc.textFile("....Mypath...").flatMap(line => line.split("""\W+"""))
val namedEnt = something(input)
Can anyone tell what to code to get named entities?
If you convert your input to a DataFrame (e.g. with .toDF), this is how you can get the Named Entities out:
Just an example of Spark NLP installation
spark-shell --packages JohnSnowLabs:spark-nlp:2.4.0
Actual example:
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
import com.johnsnowlabs.nlp.SparkNLP
SparkNLP.version()
// make sure you are using the latest release 2.4.x
// Download and load the pre-trained pipeline that has NER in English
// Full list: https://github.com/JohnSnowLabs/spark-nlp-models
val pipeline = PretrainedPipeline("recognize_entities_dl", lang="en")
// Transform your DataFrame into a new DataFrame that has a NER column
val annotation = pipeline.transform(inputDF)
// This would look something like this:
/*
+---+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
| id| text| document| sentence| token| embeddings| ner| entities|
+---+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
| 1|Google has announ...|[[document, 0, 10...|[[document, 0, 10...|[[token, 0, 5, Go...|[[word_embeddings...|[[named_entity, 0...|[[chunk, 0, 5, Go...|
| 2|Donald John Trump...|[[document, 0, 92...|[[document, 0, 92...|[[token, 0, 5, Do...|[[word_embeddings...|[[named_entity, 0...|[[chunk, 0, 16, D...|
+---+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+
*/
// This is where the results for entities are:
annotation.select("entities.result").show
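In case it helps, here is a minimal sketch (an assumption on my side, not part of the pipeline API) of how inputDF could be built from the question's text file; the pretrained pipeline annotates a column named "text" by default and does its own tokenization, so the raw lines can be passed in without the \W+ split:
import spark.implicits._

// Keep one document per line; the column must be named "text" for the pipeline above
val inputDF = sc.textFile("....Mypath...")
  .toDF("text")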
Let me know if you have any questions or problems with your input data and I'll update my answer.
References:
https://github.com/JohnSnowLabs/spark-nlp
https://github.com/JohnSnowLabs/spark-nlp-models
https://github.com/JohnSnowLabs/spark-nlp-workshop
We have a Scala application that reads lines from a text file and processes them using Akka Streams. For better performance we set the parallelism to 5. The problem is that if multiple lines contain the same email, we should keep only one of those lines and treat the others as duplicates, throwing an error. I tried to use a Java ConcurrentHashMap to detect the duplication, but it didn't work. Here is my code:
allIdentifiers = new ConcurrentHashMap[String, Int]()
Source(rows)
.mapAsync(config.parallelism.value) {
case (dataRow, index) => {
val eventResendResult: EitherT[Future, NonEmptyList[ResendError], ResendResult] =
for {
cleanedRow <- EitherT.cond[Future](
!allIdentifiers.containsKey(dataRow.lift(emailIndex)), {
allIdentifiers.put(dataRow.lift(emailIndex),index)
dataRow
}, {
NonEmptyList.of(
DuplicatedError(
s"Duplicated record at row $index",
List(identifier)
)
)
}
)
_ = logger.debug(
LoggingMessage(
requestId = RequestId(),
message = s"allIdentifiers: $allIdentifiers"
)
)
... more process step ...
} yield foldResponses(sent)
eventResendResult
.leftMap(errors => ResendResult(errors.toList, List.empty))
.merge
}
}
.runWith(Sink.reduce { (result1: ResendResult, result2: ResendResult) =>
ResendResult(
result1.errors ++ result2.errors,
result1.results ++ result2.results
)
})
We have config.parallelism.value set to 5, which means that at any moment it will process up to 5 lines concurrently. What I observed is that if there are duplicated lines right next to each other it doesn't work, for example:
line 0 contains email1
line 1 contains email1
line 2 contains email2
line 3 contains email2
line 4 contains email3
From the log I see the ConcurrentHashMap was populated with entries, but all lines passed the duplication detection and moved on to the next processing step.
So Akka Streams' parallelism is not the same thing as Java multithreading? How can I detect duplicated lines in this case?
The problem is in the following snippet:
cleanedRow <- EitherT.cond[Future](
!allIdentifiers.containsKey(dataRow.lift(emailIndex)), {
allIdentifiers.put(dataRow.lift(emailIndex),index)
dataRow
}, {
NonEmptyList.of(
DuplicatedError(
s"Duplicated record at row $index",
List(identifier)
)
)
}
)
In particular, imagine two threads simultaneously processing an email which should be deduplicated. It is possible for the following to happen (in order):
1. The first thread checks containsKey and finds the email is not in the map
2. The second thread checks containsKey and finds the email is not in the map
3. The first thread adds the email to the map (based on the result from step 1) and passes the email through
4. The second thread adds the email to the map (based on the result from step 2) and passes the email through
In other words: you need to atomically check the map for the key and update it. This is a pretty common thing to want, and it is exactly what ConcurrentHashMap's put does: it updates the value at the key and returns the previous value it replaced, or null if there was none.
I'm not too familiar with the combinators in Cats, so the following might not be idiomatic. However, note how it inserts and checks for a previous value in one atomic step.
cleanedRow <- EitherT(Future.successful {
  // put returns the value previously associated with the key, or null if the
  // key was absent, so the insert and the duplicate check happen in one atomic step
  val previous = allIdentifiers.put(dataRow.lift(emailIndex), index)
  Either.cond(
    previous == null, // first occurrence of this email -> keep the row
    dataRow,
    NonEmptyList.of(
      DuplicatedError(
        s"Duplicated record at row $index",
        List(identifier)
      )
    )
  )
})
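As a side note (just a sketch of an alternative, not necessarily more idiomatic with Cats), putIfAbsent expresses the same atomic check-and-insert a bit more directly:
// putIfAbsent only inserts when the key is missing, and returns the value that
// was already there (or null), so the "seen before?" test and the insert are a
// single atomic step.
val previous = allIdentifiers.putIfAbsent(dataRow.lift(emailIndex), index)
val isFirstOccurrence = previous == null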
UPDATE
I think I've figured out the solution. I explain it in this video. Basically, use timeoutWith, and some tricks with zip (within zip).
https://youtu.be/0A7C1oJSJDk
If I have a single observable like this:
A-1-2--B-3-4-5-C--D--6-7-E
I want to treat the "numbers" as lower priority; they should wait until the "letters" group is filled up (a group of 2, for example) OR a timeout is reached, and only then emit. Maybe the following illustration (of the desired result) can help:
A------B-1-----C--D-2----E-3-4-5-6-7
I've been experimenting with some ideas... one of them: the first step is to split that stream (groupBy) into one containing letters and the other containing numbers..., then "something in the middle" happens..., and finally those two (sub)streams get merged.
It's that "something in the middle" what I'm trying to figure out.
How can I achieve this? Is it even possible with RxJS (ver 5.5.6)? If not, what's the closest alternative? What I want to avoid is having the "numbers" flood the stream, not giving the "letters" enough of a chance to be processed in a timely manner.
Probably this video I made of my efforts so far can clarify as well:
Original problem statement: https://www.youtube.com/watch?v=mEmU4JK5Tic
So far: https://www.youtube.com/watch?v=HWDI9wpVxJk&feature=youtu.be
The problem with my solution so far (delaying each emission in the "numbers" substream using .delay) is that it's suboptimal: it keeps clocking at a slow pace (10 seconds) even after the "letters" (sub)stream has ended (not completed -- there is no clear boundary here -- it just isn't getting more values for an indeterminate amount of time). What I really need is for the "numbers" substream to raise its pace (to 2 seconds) once that happens.
Unfortunately I don't know RxJS 5 that well; I use xstream myself (authored by one of the contributors to RxJS 5), which is a little bit simpler in terms of the number of operators.
With this I crafted the following example:
(Note: the operators are pretty much the same as in Rx 5; the main difference is with flatten, which is more or less like switch but seems to handle synchronous streams differently.)
const xs = require("xstream").default;
const input$ = xs.of("A",1,2,"B",3,4,5,"C","D",6,7,"E");
const initialState = { $: xs.never(), count: 0, buffer: [] };
const state$ = input$
.fold((state, value) => {
const t = typeof value;
if (t === "string") {
return {
...state,
$: xs.of(value),
count: state.count + 1
};
}
if (state.count >= 2) {
const l = state.buffer.length;
return {
...state,
$: l > 0 ? xs.of(state.buffer[0]) : xs.of(value) ,
count: 0,
buffer: state.buffer.slice(1).concat(value)
};
}
return {
...state,
$: xs.never(),
buffer: state.buffer.concat(value),
};
}, initialState);
xs
.merge(
state$
.map(s => s.$),
state$
.last()
.map(s => xs.of.apply(xs, s.buffer))
)
.flatten()
.subscribe({
next: console.log
});
Which gives me the result you are looking for.
It works by folding the stream on itself, looking at the type of the values and emitting a new stream depending on it. When it needs to wait because not enough letters were dispatched, I emit an empty stream (xs.never(): no values, no errors, no complete) as a "placeholder".
Instead of emitting this empty stream, you could emit something like
xs.empty().endsWith(xs.periodic(timeout)).last().mapTo(value):
// stream that will emit a value only after a specified timeout.
// Because the streams are **not** flattened concurrently you can
// use this as a "pending" stream that may or may not be eventually
// consumed
where value is the last received number, in order to implement timeout-related conditions. However, you would then need to introduce some kind of reflexivity, with either a Subject in Rx or xs.imitate in xstream, because you would need to notify your state that your "pending" stream has been consumed, which makes the communication bidirectional, whereas streams / observables are unidirectional.
The key here is the use of timeoutWith to switch to the more aggressive "pacer" when the "event" kicks in. In this case the "event" is "idle detected in the higher-priority stream".
The video: https://youtu.be/0A7C1oJSJDk
I have a ReactiveList with keywords. The user can add or remove keywords from that list. The app needs to verify whether the user has typed one of the keywords.
There is already a similar post, but it doesn't take into account a flexible list:
Using Reactive Extension for certain KeyPress sequences?
var keyElements = new ReactiveList<KeyElement>();
IObservable<IObservable<int>> rangeToMax = Observable.Merge(keyElements.ItemsAdded, keyElements.ItemsRemoved).Select(obs => Observable.Range(2, keyElements.Select(ke => ke.KeyTrigger.Length).Max()));
IObservable<IObservable<string>> detectedKeyTrigger = rangeToMax
.Select(n => _keyPressed.Buffer(n, 1))
.Merge().Where(m => keyElements.Where(ke => ke.KeyTrigger == m).Any());
//Here I want to end up with IObservable<string> instead of IObservable<IObservable<string>>
I can get rid of the outer IObservable by reassigning the detectedKeyTrigger each time an element in the reactive list changes, but then I lose all my subscriptions.
So, how can I end up with just an Observable of strings?
First off, both Max and Any have overloads which take a selector and a predicate, respectively. This negates the need for the Select.
Next, I changed the Observable.Merge to use the Changed property of ReactiveList, which is the Rx version of INotifyCollectionChanged. I also changed the Select to produce an IEnumerable of ints instead; it just felt more Right™.
var keyElements = new ReactiveList<KeyElement>();
IObservable<IEnumerable<int>> rangeToMax = keyElements.Changed
    .Select(_ => Enumerable.Range(2, keyElements.Max(keyElement => keyElement.KeyTrigger.Length)));
IObservable<string> detectedKeyTrigger = rangeToMax
    .Select(range => range
        .Select(length => _keyPressed.Buffer(length, 1).Select(chars => new string(chars.ToArray()))) // 1
        .Merge()                                                                                      // 2
        .Where(m => keyElements.Any(ke => ke.KeyTrigger == m)))                                       // 3
    .Switch();                                                                                        // 4
1. Create an IObservable<string> which emits the last n characters typed by the user. Create such an observable for each of the possible lengths of a combo.
2. Merge the observables in the IEnumerable<IObservable<string>> into one IObservable<string>.
3. Only let strings which match one of the KeyTriggers through.
4. As rangeToMax.Select produces an IObservable<IObservable<string>>, we use Switch to only subscribe to the most recent IObservable<string> it produces.