Scala: Convert a string to a string array, with and without split, given that all special characters except "(" and ")" are allowed

I have a string
val a = "((x1,x2),(y1,y2),(z1,z2))"
I want to parse this into a Scala array
val arr = Array(("x1","x2"),("y1","y2"),("z1","z2"))
Is there a way of directly doing this with an expr() equivalent?
If not, how would one do this using split?
Note: x1, x2, x3, etc. are strings and can contain special characters, so the key is to use the () delimiters to parse the data.
Code I munged from Dici and Bogdan Vakulenko:
val x2 = a.trim.split("[()]").grouped(2).map(x => x(0).trim).toArray
val x3 = x2.drop(1) // the first group is always an empty string, because split emits a leading empty token when the input starts with a delimiter
val jmap = new java.util.HashMap[String, String]()
for (i <- x3) {
  val index = i.lastIndexOf(",")
  val fv = i.slice(0, index)
  val lv = i.substring(index + 1).trim
  jmap.put(fv, lv)
}
This is still susceptible to "," appearing in the second string.

Actually, I think regexes are the most convenient way to solve this.
val a = "((x1,x2),(y1,y2),(z1,z2))"
val regex = "(\\((\\w+),(\\w+)\\))".r
println(
  regex.findAllMatchIn(a)
    .map(matcher => (matcher.group(2), matcher.group(3)))
    .toList
)
Note that I made some assumptions about the format:
no whitespaces in the string (the regex could easily be updated to fix this if needed)
always tuples of two elements, never more
empty string not valid as a tuple element
only alphanumeric characters allowed (this also would be easy to fix; see the sketch below)
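Both relaxations are straightforward. Since the question allows any special characters except parentheses, a hedged sketch (the sample values and the loose name are made up for illustration) could accept arbitrary non-parenthesis characters and tolerate surrounding whitespace, although, like the snippets above, it still assumes the values themselves contain no commas:
val a = "((x 1,x-2),(y@1,y:2),(z1,z2))"
// [^(),] matches anything except parentheses and the separating comma
val loose = """\(([^(),]+),([^(),]+)\)""".r
val arr: Array[(String, String)] =
  loose.findAllMatchIn(a)
    .map(m => (m.group(1).trim, m.group(2).trim))
    .toArray
// expected: Array((x 1,x-2), (y@1,y:2), (z1,z2))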

val a = "((x1,x2),(y1,y2),(z1,z2))"
a.replaceAll("[\\(\\) ]", "")
  .split(",")
  .grouped(2) // non-overlapping pairs
  .map(x => (x(0), x(1)))
  .toArray
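Note that the snippet above uses grouped(2) where the original answer had sliding(2); sliding with the default step of 1 produces overlapping windows and would also emit pairs that straddle two tuples. A quick comparison on illustrative values:
val parts = List("x1", "x2", "y1", "y2")
parts.grouped(2).toList // List(List(x1, x2), List(y1, y2)) -- non-overlapping pairs
parts.sliding(2).toList // List(List(x1, x2), List(x2, y1), List(y1, y2)) -- overlapping windows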

Related

Replace multiple occurrences of a duplicate string in Scala with empty

I have a string:
something,'' something,nothing_something,op nothing_something,'' cat,cat
I want the output to be:
'' something,op nothing_something,cat
Is there any way to achieve it?
If I understand your requirement correctly, here's one approach with the following steps:
Split the input string by "," and create a list of indexed-CSVs and convert it to a Map
Generate 2-combinations of the indexed-CSVs
Check each of the indexed-CSV pairs and capture the index of any CSV which is contained within the other CSV
Since the CSVs corresponding to the captured indexes are contained within some other CSV, removing these indexes will result in remaining indexes we would like to keep
Use the remaining indexes to look up CSVs from the CSV Map and concatenate them back to a string
Here is sample code applying to a string with slightly more general comma-separated values:
val str = "cats,a cat,cat,there is a cat,my cat,cats,cat"
val csvIdxList = (Stream from 1).zip(str.split(",")).toList
val csvMap = csvIdxList.toMap
val csvPairs = csvIdxList.combinations(2).toList
val csvContainedIdx = csvPairs.collect {
  case List(x, y) if x._2.contains(y._2) => y._1
  case List(x, y) if y._2.contains(x._2) => x._1
}.distinct
// csvContainedIdx: List[Int] = List(3, 6, 7, 2)
val csvToKeepIdx = (1 to csvIdxList.size) diff csvContainedIdx
// csvToKeepIdx: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 4, 5)
val strDeduped = csvToKeepIdx.map( csvMap.getOrElse(_, "") ).mkString(",")
// strDeduped: String = cats,there is a cat,my cat
Applying the above to your sample string something,'' something,nothing_something,op nothing_something would yield the expected result:
strDeduped: String = '' something,op nothing_something
First create an Array of words by splitting the given String on commas, then filter and mkString as below:
s.split(",").filter(_.contains(' ')).mkString(",")
In Scala REPL:
scala> val s = "something,'' something,nothing_something,op nothing_something"
s: String = something,'' something,nothing_something,op nothing_something
scala> s.split(",").filter(_.contains(' ')).mkString(",")
res27: String = '' something,op nothing_something
As per Leo C's comment, I tested it as below with another String:
scala> val s = "something,'' something anything anything anything anything,nothing_something,op op op nothing_something"
s: String = something,'' something anything anything anything anything,nothing_something,op op op nothing_something
scala> s.split(",").filter(_.contains(' ')).mkString(",")
res43: String = '' something anything anything anything anything,op op op nothing_something

Is there a better way of converting Iterator[Char] to Seq[String]?

Following is the code I have used to convert an Iterator[Char] to a Seq[String].
import java.io.{File, FileInputStream}
import org.apache.commons.io.IOUtils

val result = IOUtils.toByteArray(new FileInputStream(new File(fileDir)))
val remove_comp = result.grouped(11)
  .map { arr => arr.update(2, 32); arr }
  .flatMap { arr => arr.update(3, 32); arr }
val convert_iter = remove_comp.map(_.toChar.toString).toSeq.mkString.split("\n")
val rdd_input = Spark.sparkSession.sparkContext.parallelize(convert_iter)
The file at fileDir contains:
12**34567890
12##34567890
12!!34567890
12¬¬34567890
12
'34567890
I am not happy with this code, as the data size is big and converting to a string would run out of heap space.
val convert_iter = remove_comp.map(_.toChar)
convert_iter: Iterator[Char] = non-empty iterator
Is there a better way of coding?
By completely disregarding corner cases about empty Strings etc., I would start with something like:
val test = Iterable('s','f','\n','s','d','\n','s','v','y')
val (allButOne, last) = test.foldLeft((Seq.empty[String], Seq.empty[Char])) {
  case ((strings, chars), char) =>
    if (char == '\n') (strings :+ chars.mkString, Seq.empty)
    else (strings, chars :+ char)
}
val result = allButOne :+ last.mkString
I am sure it could be made more elegant, and handle corner cases better (once you define you want them handled), but I think it is a nice starting point.
But to be honest I am not entirely sure what you want to achieve. I just guessed that you want to group chars divided by \n together and turn them into Strings.
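For the sample input above, and assuming the characters after the last '\n' should form the final string, this comes out as (my own check, not from the original answer):
assert(result == Seq("sf", "sd", "svy")) // result: Seq[String] = List(sf, sd, svy)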
Looking at your code, I see that you are trying to remove special characters such as **, ## and so on from the file that contains the following data:
12**34567890
12##34567890
12!!34567890
12¬¬34567890
12
'34567890
For that you can just read the data using sparkContext's textFile and use the regex with replaceAllIn:
import scala.util.matching.Regex

val pattern = new Regex("[¬~!##$^%&*\\(\\)_+={}\\[\\]|;:\"'<,>.?` /\\-]")
val result = sc.textFile(fileDir).map(line => pattern.replaceAllIn(line, ""))
and you should have your result as an RDD[String], which you can also iterate over:
1234567890
1234567890
1234567890
1234567890
12
34567890
Updated
If there are \n and \r characters in between the text at the 3rd and 4th places, and the result is all fixed-length 10-digit text, then you can use the wholeTextFiles API of sparkContext with the following regex:
val pattern = new Regex("[¬~!##$^%&*\\(\\)_+={}\\[\\]|;:\"'<,>.?` /\\-\r\n]")
val result = sc.wholeTextFiles(fileDir).flatMap(line => pattern.replaceAllIn(line._2, "").grouped(10))
You should get the output as
1234567890
1234567890
1234567890
1234567890
1234567890
I hope the answer is helpful

Scala Regex with $ and String Interpolation

I am writing a regex in Scala
val regex = "^foo.*$".r
this is great but if I want to do
var x = "foo"
val regex = s"""^$x.*$""".r
now we have a problem because $ is ambiguous. Is it possible to have string interpolation and be able to write a regex as well?
I can do something like
val x = "foo"
val regex = ("^" + x + ".*$").r
but I don't like to do a +
You can use $$ to have a literal $ in an interpolated string.
You should use the raw interpolator when enclosing a string in triple-quotes as the s interpolator will re-enable escape sequences that you might expect to be interpreted literally in triple-quotes. It doesn't make a difference in your specific case but it's good to keep in mind.
So: val regex = raw"""^$x.*$$""".r
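To see what that buys you, a small sketch (my own illustration, not from the original answer): the regex works as before, and the s interpolator, unlike raw, still processes escape sequences even inside triple quotes.
val x = "foo"
val regex = raw"""^$x.*$$""".r
regex.findFirstIn("foobar") // Some(foobar)
regex.findFirstIn("barfoo") // None
s"""a\nb""".length   // 3: the s interpolator turns \n into a real newline
raw"""a\nb""".length // 4: raw keeps the backslash and the n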
Using %s should work.
var x = "foo"
val regex = """^%s.*$""".format(x).r
In the off chance that you need %s to be a regex match term, just do
val regex = """^%s.*%s$""".format(x, "%s").r

Removing values after a particular character from an RDD in Scala

I have the following input:
(A,123#3A,B,C,D,134#wer,E,242#wer)
Is there a way to get the following output using filter/replace/trim or any other function?
(A,123,B,C,D,134,E,242)
Your question is not completely clear.
If you mean that your input is a list of strings then you can do:
val input = Seq("A","123#3A","B","C","D","134#wer","E","242#wer")
input.map(_.split("#").head)
but if you mean that your input is one string then you can do:
val input2 = "(A,123#3A,B,C,D,134#wer,E,242#wer)"
val Pattern = "\\(([a-zA-Z\\d,#]*)\\)".r
input2 match {
  case Pattern(str) => "(" + str.split(",").map(_.split("#").head).mkString(",") + ")"
}
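For the sample inputs, both snippets reduce each token to the part before the first "#"; a quick check (the expected values are worked out by hand, and the cleaned name is just for illustration):
assert(input.map(_.split("#").head) == Seq("A", "123", "B", "C", "D", "134", "E", "242"))
val cleaned = input2 match {
  case Pattern(str) => "(" + str.split(",").map(_.split("#").head).mkString(",") + ")"
}
assert(cleaned == "(A,123,B,C,D,134,E,242)")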

How to split and remove empty spaces before getting the result

I have this String:
val str = "9617 / 20634"
So after the split I want to parse only the 2 values, 9617 & 20634, in order to calculate a percentage.
So instead of trimming after the split, can I do it before?
It is easier to remove spaces after the split than before. Unless you are expected to trim it before, here is a simpler way of doing it.
val Array(x, y) = "9617 / 20634".split("/").map(_.trim.toFloat)
val p = x / y * 100
Values are converted to Float to prevent integer division leading to 0.
The val Array(x,y) = ... statement is just another way of calling
unapplySeq of the Array object.
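To make the equivalence concrete, here is a rough sketch (my own illustration) of what the destructuring val desugars to:
"9617 / 20634".split("/").map(_.trim.toFloat) match {
  case Array(x, y) => (x, y) // (9617.0, 20634.0)
  // any other number of elements throws a MatchError, just like `val Array(x, y) = ...`
}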
For a scalable solution one might use regexes:
scala> val r = """(\d+) */ *(\d+)""".r
r: scala.util.matching.Regex = (\d+) */ *(\d+)
scala> "111 / 222" match {
case r(a, b) => println(s"Got $a, $b")
case _ =>
}
Got 111, 222
This is useful if you need different usage patterns.
Here you define the correspondence between variables from your input (111, 222) and the match groups ((\d+), (\d+)).
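Tying this back to the percentage calculation from the question, a hedged sketch reusing the r pattern defined above:
"9617 / 20634" match {
  case r(a, b) => a.toFloat / b.toFloat * 100 // roughly 46.6
  case _       => 0f                          // fallback for unexpected input
}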