spark scala - replace text if exists in list - scala

I have 2 datasets.
One is a dataframe with a bunch of data, one column has comments (a string).
The other is a list of words.
If a comment contains a word in the list, I want to replace the word in the comment with ##### and return the comment in full with the replaced words.
Here's some sample data:
CommentSample.txt
1 A badword small town
2 "Love the truck, though rattle is annoying."
3 Love the paint!
4
5 "Like that you added the ""oh badword2"" handle to passenger side."
6 "badword you. specific enough for you, badword3?"
7 This car is a piece if badword2
ProfanitySample.txt
badword
badword2
badword3
Here's my code so far:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
case class Response(UniqueID: Int, Comment: String)
val response = sc.textFile("file:/data/CommentSample.txt").map(_.split("\t")).filter(_.size == 2).map(r => Response(r(0).trim.toInt, r(1).trim)).toDF()
var profanity = sc.textFile("file:/data/ProfanitySample.txt").map(x => (x.toLowerCase())).toArray();
def replaceProfanity(s: String): String = {
  val l = s.toLowerCase()
  val r = "#####"
  if (profanity.contains(s)) r
  else s
}

def processComment(s: String): String = {
  val commentWords = sc.parallelize(s.split(' '))
  commentWords.foreach(replaceProfanity)
  commentWords.collect().mkString(" ")
}
response.select(processComment("Comment")).show(100)
It compiles, it runs, but the words are not replaced.
I don't know how to debug in Scala.
I'm totally new! This is my first project ever!
Many thanks for any pointers.
-M

First, I think the use case you describe here won't benefit much from the use of DataFrames - it's simpler to implement using RDDs only (DataFrames are mostly convenient when your transformations can easily be expressed in SQL, which isn't the case here).
So - here's a possible implementation using RDDs. This assumes the list of profanities isn't too large (i.e. up to ~thousands), so we can collect it into non-distributed memory. If that's not the case, a different approach (involving a join) might be needed.
case class Response(UniqueID: Int, Comment: String)
val mask = "#####"
import org.apache.spark.rdd.RDD

val responses: RDD[Response] = sc.textFile("file:/data/CommentSample.txt").map(_.split("\t")).filter(_.size == 2).map(r => Response(r(0).trim.toInt, r(1).trim))
val profanities: Array[String] = sc.textFile("file:/data/ProfanitySample.txt").collect()
val result = responses.map(r => {
  // using foldLeft here means we'll replace profanities one by one,
  // with the result of each replace as the input of the next,
  // starting with the original comment
  profanities.foldLeft(r.Comment)({
    case (updatedComment, profanity) => updatedComment.replaceAll(s"(?i)\\b$profanity\\b", mask)
  })
})
result.take(10).foreach(println) // just printing some examples...
Note that the case-insensitivity and the "words only" limitations are implemented in the regex itself: "(?i)\\bSomeWord\\b".
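As a quick local sanity check of that masking logic (plain Scala, no Spark; the sample values below just mirror the question's data):
val sampleProfanities = Array("badword", "badword2", "badword3")
val sampleComment = "badword you. specific enough for you, badword3?"
val masked = sampleProfanities.foldLeft(sampleComment) {
  case (updated, profanity) => updated.replaceAll(s"(?i)\\b$profanity\\b", "#####")
}
// masked: String = ##### you. specific enough for you, #####?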

Related

How can I take the highest ranking filter condition that ends up matching in a dataframe?

Wording of my question might be confusing, so let me explain. Say I have an array of strings, ranked in order of preference: the pattern at index 0 should ideally match in the dataframe column, but if it doesn't, index 1 is the next best option, and so on. I have written this logic like this, but I don't feel it's the most efficient way to do it. Is there a better way?
The datasets are quite small, but I fear this can't really scale very well.
val df = spark.createDataFrame(data)
val nameArray = Array[String]("Name", "Name%", "%Name%", "Person Name", "Person Name%", "%Person Name%")
nameArray.foreach(x => {
  val nameDf = df.where("text like '" + x + "'")
  if (nameDf.count() > 0) {
    nameDf.show(1)
    break()
  }
})
If the values are ordered according to preference from left (highest precedence) to right (lowest precedence), and lower-precedence patterns already cover higher-precedence ones (it doesn't look like that is the case in your example), you can generate an expression like this:
import org.apache.spark.sql._
import org.apache.spark.sql.functions._

def matched(df: DataFrame, nameArray: Seq[String], c: String = "text") = {
  val matchIdx = nameArray.zipWithIndex.foldRight(lit(-1)) {
    case ((s, i), acc) => when(col(c) like s, lit(i)).otherwise(acc)
  }
  df.select(max(matchIdx)).first match {
    case Row(-1) => None // max is -1: no pattern matched any record
    case Row(i: Int) => Some(nameArray(i))
  }
}
Usage examples:
matched(Seq("Some Name", "Name", "Name Surname").toDF("text"), Seq("Name", "Name%", "%Name%"))
// Option[String] = Some(%Name%)
There are two advantages of this method:
Only one action is required.
Pattern matching can be short circuited.
If the pre-conditions are not satisfied, you can instead count, in a single action, how many records each pattern fails to match:
import org.apache.spark.sql.functions._
val unmatchedCount: Map[String, Long] = df.select(
  nameArray.map(s => count(when(not($"text" like s), 1)).alias(s)): _*
).first.getValuesMap[Long](nameArray)
Unlike the first approach it will check all patterns, but it requires only one action.
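From that map you could then, for example, pick the highest-precedence pattern that leaves no record unmatched (a small follow-up sketch, not part of the original answer):
// first pattern, in preference order, that every record matches, if any
val bestFullMatch: Option[String] = nameArray.find(p => unmatchedCount(p) == 0L)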

Spark: Transforming JSON files to correct format

I've 100+ million records stored in files with the following JSON structure (real data has way more columns, rows and is also nested)
{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}
The sqlContext.read.json function is unable to parse this since the records aren't on multiple lines but on one big line. The solution below solves this problem, but is a big performance killer. What would be the best way, performance wise, to handle this issue in Apache Spark?
val rdd = sc.wholeTextFiles("s3://some-bucket/**/*")
val validJSON = rdd.flatMap(_._2.replace("}{", "}\n{").split("\n"))
val df = sqlContext.read.json(validJSON)
df.count()
df.select("id").show()
This is a riff on Antot's answer, which should handle nested JSON
input.toVector
  .foldLeft((false, Vector.empty[Char], Vector.empty[String])) {
    case ((true, charAccum, strAccum), '{') => (false, Vector('{'), strAccum :+ charAccum.mkString)
    case ((_, charAccum, strAccum), '}') => (true, charAccum :+ '}', strAccum)
    case ((_, charAccum, strAccum), char) => (false, charAccum :+ char, strAccum)
  }
  ._3
Basically what it does is split the data into a Vector[Char], and uses foldLeft to aggregate the input into substrings. The trick is to keep track of just enough information about the previous character to figure out if a { marks the start of a new object.
I used this input to test it (basically the OP's sample input, with a nested object thrown in):
val input = """{"id":"2-2-3","key":{ "test": "value"}}{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}"""
And got this result, which looks good:
Vector({"id":"2-2-3","key":{ "test": "value"}},
{"id":"2-2-3","key":"value"},
{"id":"2-2-3","key":"value"},
{"id":"2-2-3","key":"value"})
The problem with the original approach is the call _._2.replace("}{", "}\n{"), which creates another huge string from the input one, with newline chars inserted, and which is then split once again into an array.
An improvement is possible by minimizing the creation of intermediate strings and retrieving the target ones as soon as possible. For this, we can play a bit with substrings:
import scala.collection.mutable.ListBuffer

val validJson = rdd.flatMap(rawJson => {
  // functions extracted to make it more readable.
  def nextObjectStartIndex(fromIndex: Int): Int = rawJson._2.indexOf('{', fromIndex)
  def currObjectEndIndex(fromIndex: Int): Int = rawJson._2.indexOf('}', fromIndex)
  def extractObject(fromIndex: Int, toIndex: Int): String = rawJson._2.substring(fromIndex, toIndex + 1)

  // the resulting strings are put in a local buffer
  val buffer = new ListBuffer[String]()

  // init the scanning of the input string
  var posStartNextObject = nextObjectStartIndex(0)

  // main loop terminates when there are no more '{' chars
  while (posStartNextObject != -1) {
    val posEndObject = currObjectEndIndex(posStartNextObject)
    val extractedObject = extractObject(posStartNextObject, posEndObject)
    posStartNextObject = nextObjectStartIndex(posEndObject)
    buffer += extractedObject
  }

  buffer
})
Please note that this approach works only if the objects in the input JSON are not nested, i.e. it assumes that all curly braces separate objects of the same level.
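As a quick standalone check of the scanning logic (the same indexOf/substring idea, applied to a flat two-object string rather than to an RDD element):
val raw = """{"id":"2-2-3","key":"value"}{"id":"2-2-3","key":"value"}"""
val buffer = scala.collection.mutable.ListBuffer[String]()
var start = raw.indexOf('{', 0)
while (start != -1) {
  val end = raw.indexOf('}', start)
  buffer += raw.substring(start, end + 1)
  start = raw.indexOf('{', end)
}
// buffer: ListBuffer({"id":"2-2-3","key":"value"}, {"id":"2-2-3","key":"value"})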

how to find a line in a text with specific term in spark scala

I couldn't find an answer for this in Spark Scala,
please look at the details.
I have an output text that contains a list of topics with their weights, like this (it was produced by running LDA on a document):
TOPIC_0;connection;0.030922248292319265
TOPIC_0;pragmatic;0.02690878152282403
TOPIC_0;Originator;0.02443295327258558
TOPIC_0;check;0.022290036662386385
TOPIC_0;input;0.020578378303486064
TOPIC_0;character;0.019718375317755072
TOPIC_0;wide;0.017389396600966833
TOPIC_0;load;0.016898979702795396
TOPIC_0;Pretty;0.014923624938546124
TOPIC_0;soon;0.014731449663492822
I want to go through each topic and find the first sentence related to this topic in a file.
I tried something like this, but I can't work out how to write the filter:
topic.foreach { case (term, weight) =>
  val filePath = "data/20_news/sci.BusinessandFinance/14147"
  val lines = sc.textFile(filePath)
  val words = lines.flatMap(x => x.split(' '))
  val sentence = words.filter(w => words.contains(term))
}
The last line, the filter, is incorrect.
For example, if my text file is like this:
input for the program should be checked. the connection between two part is pretty simple.
then it should extract the first sentence for the topic "input".
Any help or ideas are appreciated.
I think you are filtering on your list of words and you should be filtering on lines.
This code: words.contains(term) doesn't really make sense since it returns true if the term appears in any of the words.
It would make more sense to write something like this:
w.contains(term)
So that at least your filter would only return the words that match the term.
However, what you really want is to see if the line (i.e. the sentence) contains the term.
topic.foreach { case (term, weight) =>
  val filePath = "data/20_news/sci.BusinessandFinance/14147"
  val lines = sc.textFile(filePath)
  val sentence = lines.filter(line => line.contains(term))
}
It may be though that the lines need extra splitting (e.g. on full stops) to get the sentences.
You could add this step in like so:
topic.foreach { case (term, weight) =>
  val filePath = "data/20_news/sci.BusinessandFinance/14147"
  val lines = sc.textFile(filePath)
  val morelines = lines.flatMap(l => l.split("\\. ")) // split takes a regex, so the dot needs escaping
  val sentence = morelines.filter(line => line.contains(term))
}
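To actually get the first matching sentence you still need an action on the RDD; inside the same foreach block, something like this would do it (a small follow-up sketch, not part of the original answer):
// take(1) returns an Array, so headOption gives None when nothing matches
val firstSentence: Option[String] = sentence.take(1).headOption
firstSentence.foreach(s => println(s"$term -> $s"))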
val rddOnline = sc.textFile("/path/to/file")
val hasLine = rddOnline.map(line => line.contains("whatever it is"))
This will map each line to true or false, depending on whether it contains the term.
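If you want the matching lines themselves rather than booleans, a small variation with filter gives you those directly:
val matchingLines = rddOnline.filter(line => line.contains("whatever it is"))
matchingLines.take(1).foreach(println) // first matching line, if any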

Extracting data from RDD in Scala/Spark

So I have a large dataset that is a sample of a stackoverflow userbase. One line from this dataset is as follows:
<row Id="42" Reputation="11849" CreationDate="2008-08-01T13:00:11.640" DisplayName="Coincoin" LastAccessDate="2014-01-18T20:32:32.443" WebsiteUrl="" Location="Montreal, Canada" AboutMe="A guy with the attention span of a dead goldfish who has been having a blast in the industry for more than 10 years.
Mostly specialized in game and graphics programming, from custom software 3D renderers to accelerated hardware pipeline programming." Views="648" UpVotes="337" DownVotes="40" Age="35" AccountId="33" />
I would like to extract the number from Reputation, in this case "11849", and the number from Age, in this example "35", and I would like to have them as floats.
The file is located in HDFS, so it comes in as an RDD.
val linesWithAge = lines.filter(line => line.contains("Age=")) //This is filtering out data which doesn't have age
val repSplit = linesWithAge.flatMap(line => line.split("\"")) //Here I am trying to split the data where there is a "
So when I split it on quotation marks, the reputation is at index 3 and the age at index 23, but how do I assign these to a map or a variable so I can use them as floats?
Also, I need to do this for every line of the RDD.
EDIT:
val linesWithAge = lines.filter(line => line.contains("Age=")) //transformations from the original input data
val repSplit = linesWithAge.flatMap(line => line.split("\""))
val withIndex = repSplit.zipWithIndex
val indexKey = withIndex.map{case (k,v) => (v,k)}
val b = indexKey.lookup(3)
println(b)
So I've added an index to the array and now I've successfully managed to assign one value to a variable, but I can only do it for one item in the RDD. Does anyone know how I could do it for all items?
What we want to do is to transform each element in the original dataset (represented as an RDD) into a tuple containing (Reputation, Age) as numeric values.
One possible approach is to transform each element of the RDD using String operations in order to extract the values of the elements "Age" and "Reputation", like this:
// define a function to extract the value of an element, given the name
def findElement(src: Array[String], name: String): Option[String] = {
  for {
    entry <- src.find(_.startsWith(name))
    value <- entry.split("\"").lift(1)
  } yield value
}
We then use that function to extract the interesting values from every record:
val reputationByAge = lines.flatMap { line =>
  val elements = line.split(" ")
  for {
    age <- findElement(elements, "Age")
    rep <- findElement(elements, "Reputation")
  } yield (rep.toInt, age.toInt)
}
Note how we don't need to filter on "Age" before doing this. If we process a record that does not have "Age" or "Reputation", findElement will return None. Hence the result of the for-comprehension will be None as well, and the record will be dropped by the flatMap operation.
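For instance, on a few attribute tokens like those of the sample row, findElement behaves like this (a quick illustration, not part of the original answer):
val elements = Array("Reputation=\"11849\"", "Age=\"35\"", "WebsiteUrl=\"\"")
findElement(elements, "Age")        // Some(35)
findElement(elements, "Reputation") // Some(11849)
findElement(elements, "Views")      // None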
A better way to approach this problem is by realizing that we are dealing with structured XML data. Scala provides built-in support for XML, so we can do this:
import scala.xml.XML

// helper function to map Strings to Option, where empty strings become None
def emptyStrToNone(str: String): Option[String] = if (str.isEmpty) None else Some(str)

val xmlReputationByAge = lines.flatMap { line =>
  val record = XML.loadString(line)
  for {
    rep <- emptyStrToNone((record \ "@Reputation").text)
    age <- emptyStrToNone((record \ "@Age").text)
  } yield (rep.toInt, age.toInt)
}
This method relies on the structure of the XML record to extract the right attributes. As before, we use the combination of Option values and flatMap to remove records that do not contain all the information we require.
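The question asked for floats rather than ints; if that matters, a trivial follow-up map converts the yielded values (the value name below is just introduced for illustration):
val reputationAndAgeAsFloats = xmlReputationByAge.map { case (rep, age) => (rep.toFloat, age.toFloat) }
reputationAndAgeAsFloats.take(5).foreach(println)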
First, you need a function which extracts the value for a given key of your line (getValueForKeyAs[T]), then do:
val rdd = linesWithAge.map(line => (getValueForKeyAs[Float](line,"Age"), getValueForKeyAs[Float](line,"Reputation")))
This should give you an rdd of type RDD[(Float,Float)]
getValueForKeyAs could be implemented like this:
def getValueForKeyAs[A](line: String, key: String): A = {
  val res = line.split(key + "=")
  if (res.size == 1) throw new RuntimeException(s"no value for key $key")
  val value = res(1).split("\"")(1)
  // note: asInstanceOf does not convert the String; for numeric types an explicit
  // parse (e.g. value.toFloat) would be needed instead of a cast
  value.asInstanceOf[A]
}

Scala, finding max value in arrays

First time I've had to ask a question here; there is not enough info on Scala out there for a newbie like me.
Basically what I have is a file filled with hundreds of thousands of lists formatted like this:
(type, date, count, object)
Rows look something like this:
(food, 30052014, 400, banana)
(food, 30052014, 2, pizza)
All I need to do is find the one row with the highest count.
I know I did this a couple of months ago but can't seem to wrap my head around it now. I'm sure I can do this without a function too. All I want to do is set a value and put that row in it but I can't figure it out.
I think basically what I want to do is a Math.max on the 3rd element in the lists, but I just can't get it.
Any help will be kindly appreciated. Sorry if my wording or formatting of this question isn't the best.
EDIT: There's some extra info I've left out that I should probably add:
All the records are stored in a tsv file. I've done this to split them:
val split_food = food.map(_.split("\t"))
so basically I think I need to use split_food... somehow
Modified version of Szymon's answer with your edit addressed:
val split_food = food.map(_.split("\t"))
val max_food = split_food.maxBy(tokens => tokens(2).toInt)
or, analogously:
val max_food = split_food.maxBy { case Array(_, _, count, _) => count.toInt }
In case you're using Apache Spark's RDD, which has only a limited subset of the usual Scala collection methods, you have to go with reduce:
val max_food = split_food.reduce { (max: Array[String], current: Array[String]) =>
  val curCount = current(2).toInt
  val maxCount = max(2).toInt // you probably would want to preprocess all items,
                              // so .toInt will not be called again and again
  if (curCount > maxCount) current else max
}
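Alternatively (a small variation, assuming split_food is an RDD), Spark's RDD also offers a max action that takes an implicit Ordering, which avoids writing the comparison by hand:
val max_food = split_food.max()(Ordering.by((tokens: Array[String]) => tokens(2).toInt))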
You should use maxBy function:
case class Purchase(category: String, date: Long, count: Int, name: String)

object Purchase {
  def apply(s: String) = s.split("\t") match {
    case Array(cat, date, count, name) => Purchase(cat, date.toLong, count.toInt, name)
  }
}

foodRows.map(row => Purchase(row)).maxBy(_.count)
Simply:
case class Record(food:String, date:String, count:Int)
val l = List(Record("ciccio", "x", 1), Record("buffo", "y", 4), Record("banana", "z", 3))
l.maxBy(_.count)
>>> res8: Record = Record(buffo,y,4)
Not sure if you got the answer yet, but I had the same issue with maxBy. I found that once I ran import scala.io.Source I was able to use maxBy and it worked.