Scala - Get files based on name pattern

I would like to filter the files based on some patterns like:
- Team_*.txt (e.g. Team_Orlando.txt);
- Name.*.City.txt (e.g. Name.Robert.California.txt);
- or any name at all (the pattern *.*).
All the filters come from a database table, so they are dynamic. I'm trying to avoid shelling out to OS commands like cp or mv. Is it possible to filter files using patterns like the ones above?
Here is what I've tried, but it throws a regex error:
def getFiles(dir: File, filter: String) = {
  (dir.isDirectory, dir.exists) match {
    case (true, true) =>
      dir.listFiles.filter(f => f.getName.matches(filter))
    case _ =>
      Array[File]()
  }
}

You can use java.nio's Files.newDirectoryStream() for that; it accepts a pattern in the desired (glob) format:
val stream = Files.newDirectoryStream(dir, pattern)
Check http://docs.oracle.com/javase/tutorial/essential/io/dirs.html#glob for a detailed description.
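For example, here is a minimal sketch of getFiles rewritten on top of newDirectoryStream, under the assumption that the database filters are globs like Team_*.txt or *.* (the conversion helper comes from scala.jdk.CollectionConverters, i.e. Scala 2.13+):

import java.io.File
import java.nio.file.Files
import scala.jdk.CollectionConverters._

// Globs are not regexes: "*.*" is a dangling quantifier as a regex,
// which is why getName.matches(filter) threw, but it is a valid glob.
def getFiles(dir: File, filter: String): Array[File] =
  if (dir.exists && dir.isDirectory) {
    val stream = Files.newDirectoryStream(dir.toPath, filter) // filter is a glob here
    try stream.iterator.asScala.map(_.toFile).toArray
    finally stream.close()
  } else Array.empty[File]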

Related

Scala how to use regex on endsWith?

I'm trying to figure out how to isolate all file extensions from a list of file names using regex and endsWith.
So as an example
input:
file.txt, notepad.exe
output:
txt, exe
My idea is to use filter to get the file names that satisfy endsWith("."_). But endsWith("."_) doesn't work.
Any suggestions?
You really do not want to filter; you want to map each filename into its extension (and maybe then collect only the ones that had an extension, and you probably only want each unique extension).
You can use a regex for that.
object ExtExtractor {
  // capture the word characters that follow the last '.'
  // (no trailing '?' on the group: making it optional would match "file."
  // with a null extension and NPE on toLowerCase)
  val ExtRegex = """.*\.(\w+)""".r

  def apply(data: List[String]): Set[String] =
    data.iterator.collect {
      case ExtRegex(ext) => ext.toLowerCase
    }.toSet
}
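A quick usage example of the extractor above:

ExtExtractor(List("file.txt", "notepad.exe", "archive.TXT"))
// res: Set[String] = Set(txt, exe)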
How about using split('.')? The last element of the resulting array is the extension:
val parts = fileName.split("\\.")
val extension = parts(parts.length - 1)

Scala: mutable.Map[Any, mutable.Map[Any, IndexedSeq[Any]]] issue

I have a tiny problem when trying to put an item in my second mutable map.
My objective: gather some elements located in many different XML files, and organise them into the hierarchy they belong to (the files are an unstructured mess, with categories given in seemingly no logical order).
Those elements are: the level of the category in the hierarchy (1 - x, 1 being the top level) as iLevel, the category code as catCode, its name, and if need be, the names of its parents (all names located in namesCategories).
val categoryMap = mutable.Map.empty[Int, mutable.Map[String, IndexedSeq[String]]]
...
// Before: search a first file for links to other files.
// For each category file found, treat it and store it for further treatment.
matches.foreach { f =>
  ....
  // Search for a specific regex, and for each match store what we are interested in.
  matchesCat.foreach { t =>
    sCat = t.replaceFirst((system_env + """\S{4}"""), "")
    // iLevel is given by the number of '/' remaining in the string.
    iLevel = sCat.count(_ == '/')
    // Reset catCode and namesCategories.
    catCode = ""
    namesCategories.clear()
    // Search and extract the data from sCat using premade regex patterns.
    sCat match {
      case patternCatCode(codeCat) => catCode = s"$codeCat"
    }
    // Remove the category code to prepare to extract names.
    sCat.replace(patternCatCode.toString(), "")
    // Extract names.
    do {
      sCat match {
        case patternCatNames(name) => namesCategories += s"$name"
      }
      sCat.replace(patternCatNames.toString(), "")
    } while (sCat != "")
    // Create the level entry if it doesn't exist.
    if (!categoryMap.contains(iLevel)) {
      categoryMap.put(iLevel, mutable.Map.empty[String, IndexedSeq[String]])
    }
    // Try to add the cat code and the names, which must stay in order for
    // further treatment, to the map.
    categoryMap(iLevel).put(catCode, namesCategories.clone())
  }
}
}
Problem:
Type mismatch, expected: IndexedSeq[String], actual: mutable.Builder[String, IndexedSeq[String]]
As Travis Brown kindly noted, I have a type mismatch, but I don't know how to fix it and get the general idea working.
I tried to keep the code to only what is relevant here; if more is needed I'll edit again.
Any tips?
Thanks for your help
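For what it's worth, a minimal sketch of where such a mismatch typically comes from, assuming namesCategories is a Builder as the error message suggests (the declarations below are illustrative, not taken from the original code):

import scala.collection.mutable

// a mutable.Builder[String, IndexedSeq[String]], matching the error message
val namesCategories = IndexedSeq.newBuilder[String]
namesCategories += "some name"

val level = mutable.Map.empty[String, IndexedSeq[String]]
// level.put("CAT01", namesCategories)       // type mismatch: a Builder is not an IndexedSeq
level.put("CAT01", namesCategories.result()) // result() materialises the IndexedSeq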

Entity Framework: combining exact and wildcard searching conditional on search term

I'm creating a query to search the db using EF (TdsDb being the EF context).
string searchValue = "Widget";
TdsDb tdsDb = new TdsDb();
IQueryable<Counterparty> counterparties;
I can do exact match:
counterparties = tdsDb.Counterparties.Where(x => x.CounterpartyName == searchValue);
or wildcard match:
counterparties = tdsDb.Counterparties.Where(x => x.CounterpartyName.Contains(searchValue));
But I want to be able to do both, i.e. (pseudocode):
counterparties = tdsDb.Counterparties.Where(x =>
    if (searchValue.EndsWith("%"))
    {
        if (searchValue.StartsWith("%"))
            { x.CounterpartyName.Contains(searchValue) }
        else
            { x.CounterpartyName.StartsWith(searchValue) }
    }
    else
        { x.CounterpartyName == searchValue }
);
Now clearly I can't put an if statement in the where clause like that. But I also can't duplicate the queries: the ones shown here are hugely dumbed down, and the production query is far longer, so having multiple versions of the same long query that vary in only one clause seems very unhealthy and unmaintainable.
Any ideas?
You should be able to use the ternary operator:
bool startsWithWildCard = searchValue.StartsWith("%");
bool endsWithWildCard = searchValue.EndsWith("%");

counterparties = tdsDb.Counterparties.Where(x =>
    endsWithWildCard
        ? (startsWithWildCard
            ? x.CounterpartyName.Contains(searchValue)
            : x.CounterpartyName.StartsWith(searchValue))
        : (x.CounterpartyName == searchValue));
By the way, did you test whether querying with a searchValue that has a % at the beginning or end actually works as you expect? The % might be escaped and treated as a literal character to search for, because StartsWith and Contains already prepend/append % wildcards to the generated SQL search term. In that case you need to trim the % off searchValue before you pass it into StartsWith or Contains.

Preferred way of processing this data with parallel arrays

Imagine a sequence of java.io.File objects. The sequence is not in any particular order; it gets populated after a directory traversal. The names of the files can be like this:
/some/file.bin
/some/other_file_x1.bin
/some/other_file_x2.bin
/some/other_file_x3.bin
/some/other_file_x4.bin
/some/other_file_x5.bin
...
/some/x_file_part1.bin
/some/x_file_part2.bin
/some/x_file_part3.bin
/some/x_file_part4.bin
/some/x_file_part5.bin
...
/some/x_file_part10.bin
Basically, I can have 3 types of files. The first type is the simple one, which only has a .bin extension. The second type is formed from parts _x1.bin till _x5.bin. And the third type can be formed of 10 smaller parts, from _part1 till _part10.
I know the naming may be strange, but this is what I have to work with :)
I want to group the files together (all the pieces of a file should be processed together), and I was thinking of using parallel arrays to do this. The thing I'm not sure about is how I can perform the reduce/accumulation part, since all the threads will be working on the same array.
val allBinFiles = allBins.toArray // array of java.io.File
I was thinking of handling it with something like this:
import java.io.File
import scala.collection.mutable.ListBuffer
import scala.jdk.CollectionConverters._

// .asScala wraps the synchronized java.util.Map as a scala.collection.mutable.Map,
// which is what provides getOrElseUpdate
val mapAcumulator = java.util.Collections
  .synchronizedMap(new java.util.HashMap[String, ListBuffer[File]]())
  .asScala

allBinFiles.par.foreach { file =>
  file match {
    // for something like /some/x_file_x4.bin, nameTillPart will be /some/x_file
    case ComposedOf5Name(nameTillPart) =>
      mapAcumulator.getOrElseUpdate(nameTillPart, new ListBuffer[File]()) += file
    case ComposedOf10Name(nameTillPart) =>
      mapAcumulator.getOrElseUpdate(nameTillPart, new ListBuffer[File]()) += file
    // simple file, without any pieces
    case _ =>
      mapAcumulator.getOrElseUpdate(file.toString, new ListBuffer[File]()) += file
  }
}
I was thinking of doing it like I've shown in the above code: having extractors for the files and using part of the path as the key in the map. For example, /some/x_file can hold as values /some/x_file_x1.bin to /some/x_file_x5.bin. I also think there could be a better way of handling this; I would be interested in your opinions.
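(The extractors themselves aren't shown in the question; here is a hypothetical sketch of what they might look like, with the names kept from the question and the regexes being my assumption based on the file listing:)

import java.io.File

// matches /some/x_file_x4.bin style names and extracts "/some/x_file"
object ComposedOf5Name {
  private val Pattern = """(.*)_x[1-5]\.bin""".r
  def unapply(f: File): Option[String] = f.getPath match {
    case Pattern(prefix) => Some(prefix)
    case _ => None
  }
}

// matches /some/x_file_part10.bin style names and extracts "/some/x_file"
object ComposedOf10Name {
  private val Pattern = """(.*)_part(?:10|[1-9])\.bin""".r
  def unapply(f: File): Option[String] = f.getPath match {
    case Pattern(prefix) => Some(prefix)
    case _ => None
  }
}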
The alternative is to use groupBy:
val mp = allBinFiles.par.groupBy {
  case ComposedOf5Name(x) => x
  case ComposedOf10Name(x) => x
  case f => f.toString
}
This will return a parallel map of parallel arrays of files (ParMap[String, ParArray[File]]). If you want a sequential map of sequential sequences of files from this point:
val sqmp = mp.map { case (k, fs) => (k, fs.seq) }.seq // convert each ParArray, then the map itself
To ensure that the parallelism kicks in, make sure you have enough elements in your parallel array (10k+).

How to further improve error messages in Scala parser-combinator based parsers?

I've coded a parser based on Scala parser combinators:
class SxmlParser extends RegexParsers with ImplicitConversions with PackratParsers {
  [...]
  lazy val document: PackratParser[AstNodeDocument] =
    ((procinst | element | comment | cdata | whitespace | text)*) ^^ {
      AstNodeDocument(_)
    }
  [...]
}
object SxmlParser {
  def parse(text: String): AstNodeDocument = {
    var ast = AstNodeDocument()
    val parser = new SxmlParser()
    val result = parser.parseAll(parser.document, new CharArrayReader(text.toArray))
    result match {
      case parser.Success(x, _) => ast = x
      case parser.NoSuccess(err, next) =>
        tool.die("failed to parse SXML input " +
          "(line " + next.pos.line + ", column " + next.pos.column + "):\n" +
          err + "\n" +
          next.pos.longString)
    }
    ast
  }
}
Usually the resulting parsing error messages are rather nice. But sometimes they degrade to just:
sxml: ERROR: failed to parse SXML input (line 32, column 1):
`"' expected but `' found
^
This happens if a quote character is not closed and the parser reaches the EOT. What I would like to see here is (1) which production the parser was in when it expected the '"' (I have multiple ones) and (2) where in the input this production started parsing (which is an indicator of where the opening quote is in the input). Does anybody know how I can improve the error messages and include more information about the actual internal parsing state when the error happens (perhaps something like a production-rule stack trace, or whatever can reasonably be given here to better identify the error location)? BTW, the above "line 32, column 1" is actually the EOT position and hence of no use here, of course.
I don't know yet how to deal with (1), but I was also looking for (2) when I found this webpage:
https://wiki.scala-lang.org/plugins/viewsource/viewpagesrc.action?pageId=917624
I'm just copying the information:
A useful enhancement is to record the input position (line number and column number) of the significant tokens. To do this, you must do three things:
- Make each output type extend scala.util.parsing.input.Positional
- Invoke the Parsers.positioned() combinator
- Use a text source that records line and column positions
and
Finally, ensure that the source tracks positions. For streams, you can simply use scala.util.parsing.input.StreamReader; for Strings, use scala.util.parsing.input.CharArrayReader.
I'm currently playing with it, so I'll try to add a simple example later.
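In the meantime, here is a minimal self-contained sketch of those three steps (the Word type and the grammar are illustrative only):

import scala.util.parsing.combinator.RegexParsers
import scala.util.parsing.input.Positional

// 1. the output type extends Positional so it can carry a position
case class Word(s: String) extends Positional

object PositionedExample extends RegexParsers {
  // 2. positioned() records where this production started matching
  val word: Parser[Word] = positioned("""\w+""".r ^^ Word.apply)

  def main(args: Array[String]): Unit =
    parseAll(word, "hello") match {
      // 3. w.pos now holds the line/column where the token began
      case Success(w, _) => println(s"parsed $w at ${w.pos}")
      case ns: NoSuccess => println(ns.msg)
    }
}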
In such cases you may use err, failure and ~! with production rules designed specifically to match the error.
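For illustration, a hedged sketch of that idea: ~! commits the parser once the opening quote has matched, and err produces a tailored, non-backtracking message when the closing quote never arrives (the grammar here is invented for the example):

import scala.util.parsing.combinator.RegexParsers

object QuotedParser extends RegexParsers {
  // after the opening quote, ~! forbids backtracking out of this rule
  val quoted: Parser[String] =
    "\"" ~! """[^"]*""".r ~ ("\"" | err("string literal opened but never closed")) ^^ {
      case _ ~ body ~ _ => body
    }
}

With this, input that hits EOT inside a string fails with the tailored message instead of the generic `"' expected but `' found.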