Scala Boolean variables need to change dynamically - scala

I have a method that evaluates an expression based on these Boolean values:
var Ma, Co, Ar, Le, St, Ri: Boolean = false
def Logic: Boolean =
  (Ma & (!Ma | Co) & (!Ma | Ar) & (!Co | Ma) &
    (!Ar | Ma) & (!Ar | Le | St | Ri))
In the main() method I have:
def main(args: Array[String]) {
  val ls: List[String] = List("Ma", "Co", "Ar")
  // now I need to change the values of the Ma, Co and Ar variables to true
}
Is there a general way to set to true only those Boolean variables whose names appear in this list?

You can use an Enumeration, and use its ValueSet to store your true values instead of individual vars. This lets you refer to them by String name:
object MyBoolValue extends Enumeration {
  type MyBoolValue = Value
  val Ma, Co, Ar, Le, St, Ri = Value
}
import MyBoolValue._ // brings the MyBoolValue type alias into scope for the class below
class MyBoolValueContext {
  var trueValues = MyBoolValue.ValueSet.empty
  def isTrue(v: MyBoolValue) = trueValues contains v
  def apply(v: MyBoolValue) = isTrue(v)
}
So then you can do:
import MyBoolValue._
val c = new MyBoolValueContext
c(Ma)
c.trueValues += Le
c.trueValues -= St
def logic: Boolean =
  (c(Ma) &
    (!c(Ma) | c(Co)) &
    (!c(Ma) | c(Ar)) &
    (!c(Co) | c(Ma)) &
    (!c(Ar) | c(Ma)) &
    (!c(Ar) | c(Le) | c(St) | c(Ri)))
And you can use withName to handle String input:
c.trueValues ++= List("Ar", "Le", "St").map(MyBoolValue.withName)
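Note that withName throws a NoSuchElementException for a name that isn't a member of the enumeration; if the input strings aren't trusted, a lookup that returns an Option is safer. A small sketch (parseBoolValue is just an illustrative name):
def parseBoolValue(name: String): Option[MyBoolValue] =
  MyBoolValue.values.find(_.toString == name)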
You can get a little fancier by making the context implicit:
implicit class ResolveMyBoolValue(val self: MyBoolValue) extends AnyVal {
  def get(implicit context: MyBoolValueContext): Boolean = context.isTrue(self)
}
implicit val context = new MyBoolValueContext
val result = Ar.get | !St.get
Or use an implicit conversion, though this could cause some confusion if misused:
implicit def resolveMyBoolValue(v: MyBoolValue)
                               (implicit context: MyBoolValueContext): Boolean = {
  context.isTrue(v)
}
val result: Boolean = Le
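Tying this back to the question's list, a minimal sketch assuming the definitions above and a fresh context:
val ctx = new MyBoolValueContext
ctx.trueValues ++= List("Ma", "Co", "Ar").map(MyBoolValue.withName)
ctx(Ma) // true
ctx(Le) // false
// With only Ma, Co and Ar set, the last clause (!Ar | Le | St | Ri) is false,
// so the Logic expression evaluated against ctx is false; adding Le flips it to true.
ctx.trueValues += Le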

Not without reflection, I think. But if you keep the name->value mapping in a map, you can do something like this:
val v = scala.collection.mutable.Map[String, Boolean]("Ma" -> false, "Co"-> false, "Ar" -> false, "Le" -> false, "St" -> false, "Ri" -> false)
//> v : scala.collection.mutable.Map[String,Boolean] = Map(Ar -> false, Le -> false, Co -> false, Ma -> false, St -> false, Ri -> false)
def Logic:Boolean = (v("Ma") & (!v("Ma") | v("Co")) & (!v("Ma") | v("Ar")) & (!v("Co") | v("Ma")) &
(!v("Ar") | v("Ma")) & (!v("Ar") | v("Le") | v("St") | v("Ri")))
//> Logic: => Boolean
val ls: List[String] = List("Ma", "Co", "Ar") //> ls : List[String] = List(Ma, Co, Ar)
v("Ma") //> res0: Boolean = false
ls.foreach(v(_) = true)
v("Ma") //> res1: Boolean = true
Logic //> res2: Boolean = false
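Continuing the worksheet above: with only Ma, Co and Ar true, the last clause (!v("Ar") | v("Le") | v("St") | v("Ri")) is false, so Logic is false; setting any of Le, St or Ri flips it. A sketch:
v("Le") = true
Logic //> true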

One of the ways to do it:
def Logic(params: List[String]) = {
  val p = params.map(par => (par, true)).toMap.withDefaultValue(false)
  (
    p("Ma") & (
      !p("Ma") | p("Co")
    ) & (
      !p("Ma") | p("Ar")
    ) & (
      !p("Co") | p("Ma")
    ) & (
      !p("Ar") | p("Ma")
    ) & (
      !p("Ar") | p("Le") | p("St") | p("Ri")
    )
  )
}
scala> Logic(List("Ma","Co","Ar"))
res0: Boolean = false
scala> Logic(List("Ma","Co","Ar","Le"))
res1: Boolean = true

Related

How to order .jpg files by the ascending datetime of when the image was captured in Scala

The images were saved in the format: photo name.extension, cityname, yyyy-mm-dd hh:mm:ss
I am trying to write a function in Scala that gives the desired result.
For example:
john.jpg, USA,2013-09-15 14:08:15
BOB.jpg, UK,2013-09-15 14:08:15
RONY.jpg, USA,2013-09-15 19:08:15
A.PNG, USA,2018-09-15 21:08:15
TONY.jpg, CHINA,2020-09-15 19:08:15
MONY.PNG, CHINA,2021-09-15 21:08:15
RONY.jpg, CHINA,2015-09-15 19:08:15
A.PNG, JAPAN,2019-09-15 21:08:15
EXPECTED OUTPUT:
USA01.JPG
UK01.JPG
USA02.JPG
USA03.PNG
CHINA01.JPG
CHINA02.PNG
CHINA03.JPG
JAPAN01.PNG
There are 3 pictures from USA, so USA01, USA02 and USA03.
Similarly CHINA01, CHINA02 and CHINA03.
I'd appreciate your suggestions or approach.
Thanks
I broke it down into steps to make it clearer:
scala> val images = List(
| "john.jpg, USA,2013-09-15 14:08:15",
| "BOB.jpg, UK,2013-09-15 14:08:15",
| "RONY.jpg, USA,2013-09-15 19:08:15",
| "A.PNG, USA,2018-09-15 21:08:15",
| "TONY.jpg, CHINA,2020-09-15 19:08:15",
| "MONY.PNG, CHINA,2021-09-15 21:08:15",
| "RONY.jpg, CHINA,2015-09-15 19:08:15",
| "A.PNG, JAPAN,2019-09-15 21:08:15"
| )
val images: List[String] = List(john.jpg, USA,2013-09-15 14:08:15, BOB.jpg, UK,2013-09-15 14:08:15, RONY.jpg, USA,2013-09-15 19:08:15, A.PNG, USA,2018-09-15 21:08:15, TONY.jpg, CHINA,2020-09-15 19:08:15, MONY.PNG, CHINA,2021-09-15 21:08:15, RONY.jpg, CHINA,2015-09-15 19:08:15, A.PNG, JAPAN,2019-09-15 21:08:15)
scala> val props = images.map(_.split(",").map(_.trim)).filter(_.size == 3).map{case Array(x, y, z) => (x, y, z)}
val props: List[(String, String, String)] = List((john.jpg,USA,2013-09-15 14:08:15), (BOB.jpg,UK,2013-09-15 14:08:15), (RONY.jpg,USA,2013-09-15 19:08:15), (A.PNG,USA,2018-09-15 21:08:15), (TONY.jpg,CHINA,2020-09-15 19:08:15), (MONY.PNG,CHINA,2021-09-15 21:08:15), (RONY.jpg,CHINA,2015-09-15 19:08:15), (A.PNG,JAPAN,2019-09-15 21:08:15))
scala> val sortedProps = props.sortBy(_._3)
val sortedProps: List[(String, String, String)] = List((john.jpg,USA,2013-09-15 14:08:15), (BOB.jpg,UK,2013-09-15 14:08:15), (RONY.jpg,USA,2013-09-15 19:08:15), (RONY.jpg,CHINA,2015-09-15 19:08:15), (A.PNG,USA,2018-09-15 21:08:15), (A.PNG,JAPAN,2019-09-15 21:08:15), (TONY.jpg,CHINA,2020-09-15 19:08:15), (MONY.PNG,CHINA,2021-09-15 21:08:15))
scala> val relevantProps = sortedProps.map{ case (fname, cntry, date) => (fname.split("\\.")(1).toUpperCase, cntry) }
val relevantProps: List[(String, String)] = List((JPG,USA), (JPG,UK), (JPG,USA), (JPG,CHINA), (PNG,USA), (PNG,JAPAN), (JPG,CHINA), (PNG,CHINA))
scala> val (files, counts) = relevantProps.foldLeft((List[String](), Map[String, Int]())) { case ((res, counts), (ext, cntry)) =>
| val count = counts.getOrElse(cntry, 0) + 1
| ((s"$cntry$count.$ext") :: res, counts.updated(cntry, count))
| }
val files: List[String] = List(CHINA3.PNG, CHINA2.JPG, JAPAN1.PNG, USA3.PNG, CHINA1.JPG, USA2.JPG, UK1.JPG, USA1.JPG)
val counts: scala.collection.immutable.Map[String,Int] = Map(USA -> 3, UK -> 1, CHINA -> 3, JAPAN -> 1)
scala> val result = files.reverse
val result: List[String] = List(USA1.JPG, UK1.JPG, USA2.JPG, CHINA1.JPG, USA3.PNG, JAPAN1.PNG, CHINA2.JPG, CHINA3.PNG)
Or one-liner just for fun:
List(
  "john.jpg, USA,2013-09-15 14:08:15",
  "BOB.jpg, UK,2013-09-15 14:08:15",
  "RONY.jpg, USA,2013-09-15 19:08:15",
  "A.PNG, USA,2018-09-15 21:08:15",
  "TONY.jpg, CHINA,2020-09-15 19:08:15",
  "MONY.PNG, CHINA,2021-09-15 21:08:15",
  "RONY.jpg, CHINA,2015-09-15 19:08:15",
  "A.PNG, JAPAN,2019-09-15 21:08:15"
).map(_.split(",").map(_.trim))
  .filter(_.size == 3).map { case Array(x, y, z) => (x, y, z) }
  .sortBy(_._3)
  .map { case (fname, cntry, date) => (fname.split("\\.")(1).toUpperCase, cntry) }
  .foldLeft((List[String](), Map[String, Int]())) { case ((res, counts), (ext, cntry)) =>
    val count = counts.getOrElse(cntry, 0) + 1
    (s"$cntry$count.$ext" :: res, counts.updated(cntry, count))
  }._1.reverse
Output:
val res0: List[String] = List(USA1.JPG, UK1.JPG, USA2.JPG, CHINA1.JPG, USA3.PNG, JAPAN1.PNG, CHINA2.JPG, CHINA3.PNG)
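Note that the question's expected output zero-pads the counter (USA01 rather than USA1) while the code above doesn't. If that matters, the f interpolator does it; a sketch of the adjusted fold, reusing relevantProps from the step-by-step version:
val padded = relevantProps.foldLeft((List[String](), Map[String, Int]())) { case ((res, counts), (ext, cntry)) =>
  val count = counts.getOrElse(cntry, 0) + 1
  (f"$cntry$count%02d.$ext" :: res, counts.updated(cntry, count))
}._1.reverse
// List(USA01.JPG, UK01.JPG, USA02.JPG, CHINA01.JPG, USA03.PNG, JAPAN01.PNG, CHINA02.JPG, CHINA03.PNG)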

Scala slick join table with grouped query

I have two tables:
Shop
class ShopTable(tag: Tag) extends GenericTable[Shop, UUID] {
  def * = (id.?, name, address) <> (Shop.tupled, Shop.unapply _)
  def name = column[String]("name")
  def address = column[String]("address")
}
val shopTable = TableQuery[ShopDAO.ShopTable]
Order
class OrderTable(tag: Tag) extends GenericTable[Order, UUID] {
  def * = (id.?, shopId, amount) <> (Order.tupled, Order.unapply _)
  def shopId = column[UUID]("shop_id")
  def amount = column[Double]("amount")
}
val orderTable = TableQuery[OrderDAO.OrderTable]
I can get statistics (order count, order amount sum) for shops:
def getStatisticForShops(shopIds: List[UUID]): Future[Seq[(UUID, Int, Double)]] = {
  searchStatisticsByShopIds(shopIds).map(orderStatistics =>
    shopIds.map(shopId => {
      val o = orderStatistics.find(_._1 == shopId)
      (
        shopId,
        o.map(_._2).getOrElse(0),
        o.map(_._3).getOrElse(0.0)
      )
    })
  )
}
def searchStatisticsByShopIds(shopIds: List[UUID]): Future[Seq[(UUID, Int, Double)]] =
  db.run(searchStatisticsByShopIdsCompiled(shopIds).result)
private val searchStatisticsByShopIdsCompiled = Compiled((shopIds: Rep[List[UUID]]) =>
  orderTable.filter(_.shopId === shopIds.any)
    .groupBy(_.shopId)
    .map { case (shopId, row) =>
      (shopId, row.length, row.map(_.amount).sum.get)
    }
)
But I need to sort and filter shopTable by order count.
How can I join the grouped orderTable to shopTable, with zero values for the missing shops?
I want to get a result like this:
| id(shopId) | name | address | ordersCount | ordersAmount |
| id1 | name | address | 4 | 200.0 |
| id2 | name | address | 0 | 0.0 |
| id3 | name | address | 2 | 300.0 |
I use Scala 2.12.6, Slick 3.0.1 (for Scala 2.12), play-slick 3.0.1 and slick-pg 0.16.3.
P.S. I may have found a solution:
val shopsOrdersQuery: Query[(Rep[UUID], Rep[Int], Rep[Double]), (UUID, Int, Double), Seq] =
  searchShopsOrdersCompiled.extract
// Query shops with orders, for sorting and filtering
val allShopsOrdersQueryQuery = shopTable.joinLeft(shopsOrdersQuery).on(_.id === _._1)
  .map(s => (s._1, s._2.map(_._2).getOrElse(0), s._2.map(_._3).getOrElse(0.0)))
private val searchShopsOrdersCompiled = Compiled(
  orderTable.groupBy(_.shopId)
    .map { case (shopId, row) =>
      (shopId, row.length, row.map(_.amount).sum.get)
    }
)
Yes, this solution works fine (it is the same code as in the P.S. above).
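To then sort and filter shops by order count, a sketch on top of allShopsOrdersQueryQuery (tuple positions as in the map above; not tested against your exact schema):
val byOrderCount = allShopsOrdersQueryQuery
  .filter(_._2 > 0)   // e.g. keep only shops that have at least one order
  .sortBy(_._2.desc)  // sort by ordersCount, descending
db.run(byOrderCount.result)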

Scala transform and split String Column to MapType Column in Dataframe

I have a DataFrame with a column containing tracking request URIs with fields inside, which looks like this:
df.show(truncate = false)
+--------------------------------
| request_uri
+-----------------------------------
| /i?aid=fptplay&ast=1582163970763&av=4.6.1&did=83295772a8fee349 ...
| /i?p=fplay-ottbox-2019&av=2.0.18&nt=wifi&ov=9&tv=1.0.0&tz=GMT%2B07%3A00 ...
| ...
I need to transform this column to something that looks like this
df.show(truncate = false)
+--------------------------------
| request_uri
+--------------------------------
| (aid -> fptplay, ast -> 1582163970763, tz -> [timezone datatype], nt -> wifi , ...)
| (p -> fplay-ottbox-2019, av -> 2.0.18, ov -> 9, tv -> 1.0.0 , ...)
| ...
Basically I have to split the field names (delimiter = "&" ) and their values into a MapType of some sort, and add that to the column.
Can someone give me pointers on how to write a custom function to split the string column into a MapType column?
I'm told to use withColumn() and mapPartition, but I don't know how to implement it in a way that splits the strings and casts them to MapType.
Any help even though minimal would be heartily appreciated. I'm completely new to Scala and have been stuck on this for a week.
The solution is to use UserDefinedFunctions.
Let's take the problem one step at a time.
// We need a function which converts strings into maps
// based on the format of request uris
def requestUriToMap(s: String): Map[String, String] = {
  s.stripPrefix("/i?").split("&").map { elem =>
    val pair = elem.split("=")
    (pair(0), pair(1)) // evaluate each element to a tuple
  }.toMap
}
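// As a quick sanity check of the plain function before wrapping it in a UDF
// (illustrative only, assuming a well-formed uri):
requestUriToMap("/i?aid=fptplay&ast=1582163970763")
// Map(aid -> fptplay, ast -> 1582163970763)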
// Now we convert this function into a UserDefinedFunction.
import org.apache.spark.sql.functions.{col, udf}
// Given a request uri string, convert it to a map; the correct format is assumed.
val requestUriToMapUdf = udf((requestUri: String) => requestUriToMap(requestUri))
Now, we test.
// Test data
val df = Seq(
  ("/i?aid=fptplay&ast=1582163970763&av=4.6.1&did=83295772a8fee349"),
  ("/i?p=fplay-ottbox-2019&av=2.0.18&nt=wifi&ov=9&tv=1.0.0&tz=GMT%2B07%3A00")
).toDF("request_uri")
df.show(false)
//+-----------------------------------------------------------------------+
//|request_uri |
//+-----------------------------------------------------------------------+
//|/i?aid=fptplay&ast=1582163970763&av=4.6.1&did=83295772a8fee349 |
//|/i?p=fplay-ottbox-2019&av=2.0.18&nt=wifi&ov=9&tv=1.0.0&tz=GMT%2B07%3A00|
//+-----------------------------------------------------------------------+
// Now we apply our UDF to create a column; using the same name replaces the original column.
val mappedDf = df.withColumn("request_uri", requestUriToMapUdf(col("request_uri")))
mappedDf.show(false)
//+---------------------------------------------------------------------------------------------+
//|request_uri |
//+---------------------------------------------------------------------------------------------+
//|[aid -> fptplay, ast -> 1582163970763, av -> 4.6.1, did -> 83295772a8fee349] |
//|[av -> 2.0.18, ov -> 9, tz -> GMT%2B07%3A00, tv -> 1.0.0, p -> fplay-ottbox-2019, nt -> wifi]|
//+---------------------------------------------------------------------------------------------+
mappedDf.printSchema
//root
// |-- request_uri: map (nullable = true)
// | |-- key: string
// | |-- value: string (valueContainsNull = true)
mappedDf.schema
//org.apache.spark.sql.types.StructType = StructType(StructField(request_uri,MapType(StringType,StringType,true),true))
And that's what you wanted.
Alternative: if you're not sure that your string conforms, you can try a variation of the function which succeeds even if the string doesn't conform to the assumed format (e.g. it doesn't contain = or is an empty String).
def requestUriToMapImproved(s: String): Map[String, String] = {
  s.stripPrefix("/i?").split("&").map { elem =>
    val pair = elem.split("=")
    pair.length match {
      case 0 => ("", "")           // the split produced no elements, e.g. "=".split("=") == Array()
      case 1 => (pair(0), "")      // no '=' in the element, e.g. "potato".split("=") == Array("potato")
      case _ => (pair(0), pair(1)) // normal case, e.g. "potato=masher".split("=") == Array("potato", "masher")
    }
  }.toMap
}
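The improved variant plugs in exactly like the first one; a sketch, reusing the df defined above:
val requestUriToMapImprovedUdf = udf((requestUri: String) => requestUriToMapImproved(requestUri))
val mappedDfSafe = df.withColumn("request_uri", requestUriToMapImprovedUdf(col("request_uri")))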
The following code performs a two-phase split. Because the URIs don't have a specific structure, you can do it with a UDF like this:
val keys = List("aid", "p", "ast", "av", "did", "nt", "ov", "tv", "tz")
def convertToMap(keys: List[String]) = udf {
  (in: mutable.WrappedArray[String]) =>
    in.foldLeft[Map[String, String]](Map()) { (a, str) =>
      keys.flatMap { key =>
        val regex = s"""${key}="""
        val arr = str.split(regex)
        val value = {
          if (arr.length == 2) arr(1)
          else ""
        }
        if (!value.isEmpty)
          a + (key -> value)
        else
          a
      }.toMap
    }
}
df.withColumn("_tmp",
    split($"request_uri", """((&)|(\?))"""))
  .withColumn("map_result", convertToMap(keys)($"_tmp"))
  .select($"map_result")
  .show(false)
It gives a MapType column:
+------------------------------------------------------------------------------------------------+
|map_result |
+------------------------------------------------------------------------------------------------+
|Map(aid -> fptplay, ast -> 1582163970763, av -> 4.6.1, did -> 83295772a8fee349) |
|Map(av -> 2.0.18, ov -> 9, tz -> GMT%2B07%3A00, tv -> 1.0.0, p -> fplay-ottbox-2019, nt -> wifi)|
+------------------------------------------------------------------------------------------------+
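For completeness, this second snippet assumes roughly the following imports to be in scope (a sketch; spark here stands for your SparkSession):
import scala.collection.mutable
import org.apache.spark.sql.functions.{split, udf}
import spark.implicits._ // for the $"..." column syntax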

How to get the current and next element of a List of Lists of Options in Scala

This question might be a duplicate; however, I've not come across any question that answers my problem.
So I have a List[List[Option[Double]]]
With the following data:
var tests = List(
  List(Some(313.062468), Some(27.847252)),
  List(Some(301.873641), Some(42.884065)),
  List(Some(332.373186), Some(53.509768))
)
I'd like to calculate this equation:
def findDifference(oldPrice: Option[Double], newPrice: Option[Double]): Option[Double] = {
  Some((newPrice.get - oldPrice.get) / oldPrice.get)
}
on the following. It does the calculation element-wise across consecutive lists:
(Some(301.873641) - Some(313.062468)) / Some(313.062468)
(Some(332.373186) - Some(301.873641)) / Some(301.873641)
(Some(42.884065) - Some(27.847252)) / Some(27.847252)
(Some(53.509768) - Some(42.884065)) / Some(42.884065)
The result should be:
List(
  List(Some(-0.03573991820699504), Some(0.5399747522663995)),
  List(Some(0.10103414428290529), Some(0.24777742035415723))
)
My code so far
def findDifference(oldPrice: Option[Double], newPrice: Option[Double]): Option[Double] = {
  Some((newPrice.get - oldPrice.get) / oldPrice.get)
}
def get_deltas(data: List[List[Option[Double]]]): List[List[Option[Double]]] = {
  for {
    i <- data
    // This is where I am stuck: I have the current element i, but I need the element
    // at the same index in the next list.
  } yield findDifference(i, ???)
}
My output if I print i in the for-comprehension:
List(Some(313.062468), Some(27.847252))
List(Some(301.873641), Some(42.884065))
List(Some(332.373186), Some(53.509768))
Where am I stuck?
I don't know how to get the element at the same index from List 2 and List 3 (relative to List 1) and do the necessary calculation.
Please help me achieve the result output above.
Try to play with this:
object OptIdx {
  def main(args: Array[String]) {
    println(get_deltas(tests))
  }
  var tests = List(
    List(Some(313.062468), Some(27.847252)),
    List(Some(301.873641), Some(42.884065)),
    List(Some(332.373186), Some(53.509768)))
  def findDifference(oldPrice: Option[Double], newPrice: Option[Double]): Option[Double] = {
    Some((newPrice.get - oldPrice.get) / oldPrice.get)
  }
  def get_deltas(data: List[List[Option[Double]]]): List[List[Option[Double]]] = {
    (for {
      index <- 0 to 1
    } yield {
      (for {
        i <- 0 to data.length - 2
      } yield {
        findDifference(data(i)(index), data(i + 1)(index))
      }).toList
    }).toList
  }
}
It prints the numbers you want, but two of them are swapped. I'm sure you can figure it out, though.
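For reference, one way to get exactly the structure from the question (one inner list of deltas per pair of consecutive rows) is to pair consecutive lists with sliding and zip their elements. A sketch, reusing the question's findDifference:
def deltasBetweenRows(data: List[List[Option[Double]]]): List[List[Option[Double]]] =
  data.sliding(2).collect { case List(prev, next) =>
    prev.zip(next).map { case (oldPrice, newPrice) => findDifference(oldPrice, newPrice) }
  }.toList
// deltasBetweenRows(tests) gives
// List(List(Some(-0.0357...), Some(0.5399...)), List(Some(0.1010...), Some(0.2477...)))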
So your data structure implies that i is a list of two Options. The values you need are already there, so you can call i(0) and i(1).
I've done a quick and dirty one in the repl:
scala> val tests = List(
| List(Some(313.062468), Some(27.847252)),
| List(Some(301.873641), Some(42.884065)),
| List(Some(332.373186), Some(53.509768))
| )
tests: List[List[Some[Double]]] = List(List(Some(313.062468), Some(27.847252)), List(Some(301.873641), Some(42.884065)), List(Some(332.373186), Some(53.509768)))
And here you can see the value of i when I call:
scala> for { i <- tests } println(i)
List(Some(313.062468), Some(27.847252))
List(Some(301.873641), Some(42.884065))
List(Some(332.373186), Some(53.509768))
So you can call findDifference thus:
scala> def findDifference(oldPrice: Option[Double], newPrice: Option[Double]): Option[Double] = {
| Option((oldPrice.get - newPrice.get) /oldPrice.get)
| }
findDifference: (oldPrice: Option[Double], newPrice: Option[Double])Option[Double]
scala> for { i <- tests } println(findDifference(i(0), i(1)))
Some(0.911048896477747)
Some(0.8579403459740957)
Some(0.8390069648999904)

Create a recursive object graph from a tuple with Scala

I've got a very simple database table called regions where each region may have a parent region.
mysql> describe region;
+---------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+-------------+------+-----+---------+-------+
| region_code | char(3) | NO | PRI | NULL | |
| region_name | varchar(50) | NO | | NULL | |
| parent_region | char(3) | YES | MUL | NULL | |
+---------------+-------------+------+-----+---------+-------+
Now I'd like to hydrate this data to a Scala object graph of case classes that each have a parent of the same type.
case class Region(code: String, name: String, parent: Option[Region])
I do this with the following code. It works but it creates duplicate objects which I'd like to avoid if possible.
class RegionDB @Inject() (db: Database) {
  def getAll(): Seq[Region] = {
    Logger.debug("Getting all regions.")
    db.withConnection { implicit conn =>
      val parser = for {
        code <- str("region_code")
        name <- str("region_name")
        parent <- str("parent_region").?
      } yield (code, name, parent)
      val results = SQL("SELECT region_code, region_name, parent_region from region").as(parser.*)
      // TODO: Change this so it doesn't create duplicate records
      def toRegion(record: (String, String, Option[String])): Region = {
        val (code, name, parent) = record
        val parentRecord = parent.map(p => results.find(_._1 == p)).getOrElse(None)
        new Region(code, name, parentRecord.map(toRegion).orElse(None))
      }
      val regions = results map toRegion
      regions.foreach(r => Logger.debug("region: " + r))
      regions
    }
  }
}
I know how to do this in the imperative way but not the functional way. I know there has got to be an expressive way to do this with recursion but I can't seem to figure it out. Do you know how? Thanks!
I was able to solve this issue by restructuring the Region case class so that the parent region is a var and by adding a collection of children. It would be nice to do this without a var but oh well.
case class Region(code: String, name: String, subRegions: Seq[Region]) {
  var parentRegion: Option[Region] = None
  subRegions.foreach(_.parentRegion = Some(this))
}
The recursion is more natural going from the root down.
def getAll(): Seq[Region] = {
  Logger.debug("Getting all regions.")
  db.withConnection { implicit conn =>
    val parser = for {
      code <- str("region_code")
      name <- str("region_name")
      parent <- str("parent_region").?
    } yield (code, name, parent)
    val results = SQL("SELECT region_code, region_name, parent_region from region").as(parser.*)
    def toRegion(record: (String, String, Option[String])): Region = {
      val (regionCode, name, parent) = record
      val children = results.filter(_._3 == Some(regionCode)).map(toRegion)
      Region(regionCode, name, children)
    }
    val rootRegions = results filter (_._3 == None) map toRegion
    rootRegions.foreach(r => Logger.debug("region: " + r))
    rootRegions
  }
}
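For illustration only, here is the same root-down recursion over an in-memory list of (code, name, parent) tuples, without the database access or the parentRegion back-reference (the region data below is made up):
case class Region(code: String, name: String, subRegions: Seq[Region])
val rows = List(
  ("EU", "Europe", None),
  ("FR", "France", Some("EU")),
  ("DE", "Germany", Some("EU")),
  ("AS", "Asia", None)
)
def toRegion(row: (String, String, Option[String])): Region = {
  val (code, name, _) = row
  // children are the rows whose parent column points back at this code
  Region(code, name, rows.filter(_._3 == Some(code)).map(toRegion))
}
val roots = rows.filter(_._3 == None).map(toRegion)
// List(Region(EU,Europe,List(Region(FR,France,List()), Region(DE,Germany,List()))),
//      Region(AS,Asia,List()))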