Combine Scala Future[Seq[X]] with Seq[Future[Y]] to produce Future[(X, Seq[Y])]

I have the relationship below in my entity classes:
Customer -> * Invoice
Now I have to implement a method which returns customers together with their invoices:
type CustomerWithInvoices = (Customer, Seq[Invoice])

def findCustomerWithInvoices: Future[Seq[CustomerWithInvoices]] = {
  for {
    customers <- findCustomers
    eventualInvoices: Seq[Future[Seq[Invoice]]] = customers.map(customer => findInvoicesByCustomer(customer))
  } yield ???
}
using the existing repository methods below:
def findCustomers: Future[Seq[Customer]] = {...}
def findInvoicesByCustomer(customer: Customer): Future[Seq[Invoice]] = {...}
I tried to use a for expression as above, but I can't figure out the proper way to do it. I'm fairly new to Scala, so any help is highly appreciated.

I would use Future.sequence. Its simplified signature: sequence takes an M[Future[A]] (for a collection type M such as Seq) and returns a Future[M[A]].
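A minimal illustration of that signature (assuming an implicit ExecutionContext, e.g. the global one, is in scope):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val parts: Seq[Future[Int]] = Seq(Future(1), Future(2), Future(3))
val combined: Future[Seq[Int]] = Future.sequence(parts) // completes with Seq(1, 2, 3)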
That is what we need to solve your problem. Here's the code I would write:
val eventualCustomersWithInvoices: Future[Seq[(Customer, Seq[Invoice])]] = for {
  customers <- findCustomers
  customersWithInvoices <- Future.sequence(
    customers.map(customer => findInvoicesByCustomer(customer).map(invoices => (customer, invoices)))
  )
} yield customersWithInvoices
Note that inside the for expression customersWithInvoices has type Seq[(Customer, Seq[Invoice])], so the yielded result is a Future[Seq[(Customer, Seq[Invoice])]], hence a Future[Seq[CustomerWithInvoices]].
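As a side note, Future.traverse combines the map and the sequence step, so the second generator could also be written roughly like this (a sketch using the question's repository methods):

Future.traverse(customers)(customer =>
  findInvoicesByCustomer(customer).map(invoices => (customer, invoices))
)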

How to functionally handle a logging side effect

I want to log in the event that a record doesn't have an adjoining record. Is there a purely functional way to do this? One that separates the side effect from the data transformation?
Here's an example of what I need to do:
val records: Seq[Record] = Seq(record1, record2, ...)
val accountsMap: Map[Long, Account] = Map(record1.id -> account1, ...)
def withAccount(accountsMap: Map[Long, Account])(r: Record): (Record, Option[Account]) = {
  (r, accountsMap.get(r.id))
}

def handleNoAccounts(tuple: (Record, Option[Account])): (Record, Option[Account]) = {
  val (r, a) = tuple
  if (a.isEmpty) logger.error(s"no account for ${r.id}")
  tuple
}

def toRichAccount(tuple: (Record, Option[Account])): Option[RichAccount] = {
  val (r, a) = tuple
  a.map(acct => RichAccount(r, acct))
}
records
  .map(withAccount(accountsMap))
  .map(handleNoAccounts) // if no account is found, log
  .flatMap(toRichAccount)
So there are multiple issues with this approach that I think make it less than optimal.
The tuple return type is clumsy. I have to destructure the tuple in both of the latter two functions.
The logging function has to handle the logging and then return the tuple with no changes. It feels weird that this is passed to .map even though no transformation is taking place -- maybe there is a better way to get this side effect.
Is there a functional way to clean this up?
I could be wrong (I often am) but I think this does everything that's required.
records
  .flatMap(r =>
    accountsMap.get(r.id).fold {
      logger.error(s"no account for ${r.id}")
      Option.empty[RichAccount]
    } { a => Some(RichAccount(r, a)) })
If you're using Scala 2.13 or newer you could use tapEach, which takes a function A => Unit, applies the side effect to every element of the collection, and then returns the collection unchanged:
// you no longer need to return the tuple from the side-effecting function
def handleNoAccounts(tuple: (Record, Option[Account])): Unit = {
  val (r, a) = tuple
  if (a.isEmpty) logger.error(s"no account for ${r.id}")
}
records
  .map(withAccount(accountsMap))
  .tapEach(handleNoAccounts) // if no account is found, log
  .flatMap(toRichAccount)
In case you're using an older Scala version, you could provide an extension method (updated according to Levi Ramsey's suggestion):
implicit class SeqOps[A](s: Seq[A]) {
  def tapEach(f: A => Unit): Seq[A] = {
    s.foreach(f)
    s
  }
}
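With that implicit class in scope, the same pipeline from above compiles on older versions as well. A quick sanity check (hypothetical values) that the collection is passed through unchanged:

val xs: Seq[Int] = Seq(1, 2, 3)
val ys: Seq[Int] = xs.tapEach(x => println(s"saw $x")) // prints saw 1, saw 2, saw 3
// ys == Seq(1, 2, 3)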

Concatenate to form a list in Scala

I want to create a list of Test instances.
case class Person(name: String)
case class Test(desc: String)

val list = Seq(Person("abc"), Person("def"))
val s = Option(list)

private val elems = scala.collection.mutable.ArrayBuffer[Test]()

val f = for {
  l <- s
} yield {
  for {
    e <- l
  } yield elems += transform(e)
}
f.toSeq

def transform(p: Person): Test = {
  Test(desc = "Hello " + p.name)
}
Can anyone please help with the following?
1. a better way to avoid the nested for expressions
2. I want to get List(Test("Hello abc"), Test("Hello def")) without using an ArrayBuffer
I don't know why you're wrapping a Seq in an Option; Seq represents the no Persons case perfectly well. Is there a difference between None and Some(Seq.empty[Person]) in your application?
Assuming that you can get by without an Option[Seq[Person]]:
list.map(transform).toList
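If you really do need to keep the Option[Seq[Person]], one option (a sketch) is to map over it and fall back to an empty list:

val tests: List[Test] = s.map(_.map(transform).toList).getOrElse(List.empty[Test])
// List(Test("Hello abc"), Test("Hello def")) for Some(list), Nil for None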

Passing result of one DBIO into another

I'm new to Slick and I am trying to rewrite the following two queries to work in one transaction. My goal is to:
1. check if the element exists
2. return the existing element, or create it, handling the autoincrement from MySQL
The two functions are:
def createEmail(email: String): DBIO[Email] = {
  // We create a projection of just the email column, since we're not inserting a value for the id column
  (emails.map(p => p.email)
    returning emails.map(_.id)
    into ((email, id) => Email(id, email))
  ) += email
}

def findEmail(email: String): DBIO[Option[Email]] =
  emails.filter(_.email === email).result.headOption
How can I safely chain them, i.e. first check for existence, return the object if it already exists, and if it does not, create it and return the new element, all in one transaction?
You could use a for comprehension:
def findOrCreate(email: String) = {
  (for {
    found <- findEmail(email)
    em <- found match {
      case Some(e) => DBIO.successful(e)
      case None    => createEmail(email)
    }
  } yield em).transactionally
}
val result = db.run(findOrCreate("batman@gotham.gov"))
// Future[Email]
With a little help from the cats library (this assumes cats.data.OptionT is imported and Monad/Functor instances for DBIO are in scope):
def findOrCreate(email: String): DBIO[Email] = {
  OptionT(findEmail(email)).getOrElseF(createEmail(email)).transactionally
}

How to write nested queries in select clause

I'm trying to produce this SQL with Slick 1.0.0:
select
cat.categoryId,
cat.title,
(
select
count(product.productId)
from
products product
right join products_categories productCategory on productCategory.productId = product.productId
right join categories c on c.categoryId = productCategory.categoryId
where
c.leftValue >= cat.leftValue and
c.rightValue <= cat.rightValue
) as productCount
from
categories cat
where
cat.parentCategoryId = 2;
My most successful attempt is (I dropped the "joins" part, so it's more readable):
def subQuery(c: CategoriesTable.type) = (for {
  p <- ProductsTable
} yield (p.id.count))

for {
  c <- CategoriesTable
  if (c.parentId === 2)
} yield (c.id, c.title, (subQuery(c).asColumn))
which produces SQL lacking parentheses around the subquery:
select
x2.categoryId,
x2.title,
select count(x3.productId) from products x3
from
categories x2
where x2.parentCategoryId = 2
which is obviously invalid SQL.
Any thoughts on how to have Slick put these parentheses in the right place? Or maybe there is a different way to achieve this?
I never used Slick or ScalaQuery so it was quite an adventure to find out how to achieve this. Slick is very extensible, but the documentation on extending is a bit tricky. It might already exist, but this is what I came up with. If I have done something incorrect, please correct me.
First we need to create a custom driver. I extended the H2Driver to be able to test easily.
trait CustomDriver extends H2Driver {
  // make sure we create our query builder
  override def createQueryBuilder(input: QueryBuilderInput): QueryBuilder =
    new QueryBuilder(input)

  // extend the H2 query builder
  class QueryBuilder(input: QueryBuilderInput) extends super.QueryBuilder(input) {
    // we override the expr method in order to support the 'As' function
    override def expr(n: Node, skipParens: Boolean = false) = n match {
      // if we match our function we simply build the appropriate query
      case CustomDriver.As(column, LiteralNode(name: String)) =>
        b"("
        super.expr(column, skipParens)
        b") as ${name}"
      // we don't know how to handle this, so let super handle it
      case _ => super.expr(n, skipParens)
    }
  }
}

object CustomDriver extends CustomDriver {
  // simply define 'As' as a function symbol
  val As = new FunctionSymbol("As")

  // we override SimpleQL to add an extra implicit
  trait SimpleQL extends super.SimpleQL {
    // This is the part that makes it easy to use on queries. It's an enrichment class.
    implicit class RichQuery[T: TypeMapper](q: Query[Column[T], T]) {
      // here we redirect our as call to the As method we defined in our custom driver
      def as(name: String) =
        CustomDriver.As.column[T](Node(q.unpackable.value), name)
    }
  }

  // we need to override simple to use our version
  override val simple: SimpleQL = new SimpleQL {}
}
In order to use it we need to import specific things:
import CustomDriver.simple._
import Database.threadLocalSession
Then, to use it you can do the following (I used the tables from the official Slick documentation in my example).
// first create a function to create a count query
def countCoffees(supID: Column[Int]) =
  for {
    c <- Coffees
    if (c.supID === supID)
  } yield (c.length)

// create the query to combine name and count
val coffeesPerSupplier =
  for {
    s <- Suppliers
  } yield (s.name, countCoffees(s.id) as "test")

// print out the name and count
coffeesPerSupplier foreach { case (name, count) =>
  println(s"$name has $count type(s) of coffee")
}
The result is this:
Acme, Inc. has 2 type(s) of coffee
Superior Coffee has 2 type(s) of coffee
The High Ground has 1 type(s) of coffee

Filling a Scala immutable Map from a database table

I have a SQL database table with the following structure:
create table category_value (
category varchar(25),
property varchar(25)
);
I want to read this into a Scala Map[String, Set[String]] where each entry in the map is a set of all of the property values that are in the same category.
I would like to do it in a "functional" style with no mutable data (other than the database result set).
Following Clojure's loop construct, here is what I have come up with:
import scala.annotation.tailrec

def fillMap(statement: java.sql.Statement): Map[String, Set[String]] = {
  val resultSet = statement.executeQuery("select category, property from category_value")
  @tailrec
  def loop(m: Map[String, Set[String]]): Map[String, Set[String]] = {
    if (resultSet.next) {
      val category = resultSet.getString("category")
      val property = resultSet.getString("property")
      loop(m + (category -> (m.getOrElse(category, Set.empty) + property)))
    } else m
  }
  loop(Map.empty)
}
Is there a better way to do this, without using mutable data structures?
If you like, you could try something along these lines:
def fillMap(statement: java.sql.Statement): Map[String, Set[String]] = {
  val resultSet = statement.executeQuery("select category, property from category_value")
  Iterator.continually((resultSet, resultSet.next)).takeWhile(_._2).map(_._1).map { res =>
    val category = res.getString("category")
    val property = res.getString("property")
    (category, property)
  }.toIterable.groupBy(_._1).mapValues(_.map(_._2).toSet)
}
Untested, because I don’t have a proper sql.Statement. And the groupBy part might need some more love to look nice.
There are two parts to this problem.
Getting the data out of the database and into a list of rows.
I would use a Spring SimpleJdbcOperations for the database access, so that things at least appear functional, even though the ResultSet is being changed behind the scenes.
First, a simple implicit conversion to let us use a closure to map each row:
implicit def rowMapper[T <: AnyRef](func: (ResultSet) => T) =
  new ParameterizedRowMapper[T] {
    override def mapRow(rs: ResultSet, row: Int): T = func(rs)
  }
Then let's define a data structure to store the results. (You could use a tuple, but defining my own case class has the advantage of being just a little bit clearer regarding the names of things.)
case class CategoryValue(category:String, property:String)
Now select from the database
val db: SimpleJdbcOperations = // get this somehow
val resultList: java.util.List[CategoryValue] =
  db.query("select category, property from category_value",
    { rs: ResultSet => CategoryValue(rs.getString(1), rs.getString(2)) })
Converting the data from a list of rows into the format that you actually want
import scala.collection.JavaConversions._
val result:Map[String,Set[String]] =
resultList.groupBy(_.category).mapValues(_.map(_.property).toSet)
(You can omit the type annotations. I've included them to make it clear what's going on.)
Builders are built for this purpose. Get one via the desired collection type companion, e.g. HashMap.newBuilder[String, Set[String]].
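One way that hint could play out (a sketch with names of my own; it uses a Seq builder for the raw rows and then groups, since a Map builder alone would not merge the Sets per category):

def fillMapWithBuilder(statement: java.sql.Statement): Map[String, Set[String]] = {
  val rs = statement.executeQuery("select category, property from category_value")
  val rows = Seq.newBuilder[(String, String)] // mutation stays local to this method
  while (rs.next()) {
    rows += (rs.getString("category") -> rs.getString("property"))
  }
  rows.result()
    .groupBy(_._1)
    .map { case (category, pairs) => category -> pairs.map(_._2).toSet }
}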
This solution is basically the same as my other solution, but it doesn't use Spring, and the logic for converting a ResultSet to some sort of list is simpler than Debilski's solution.
def streamFromResultSet[T](rs: ResultSet)(func: ResultSet => T): Stream[T] = {
  if (rs.next())
    func(rs) #:: streamFromResultSet(rs)(func)
  else {
    rs.close()
    Stream.empty
  }
}
def fillMap(statement: java.sql.Statement): Map[String, Set[String]] = {
  case class CategoryValue(category: String, property: String)

  val resultSet = statement.executeQuery("""
    select category, property from category_value
  """)
  val queryResult = streamFromResultSet(resultSet) { rs =>
    CategoryValue(rs.getString(1), rs.getString(2))
  }
  queryResult.groupBy(_.category).mapValues(_.map(_.property).toSet)
}
There is only one approach I can think of that does not include either mutable state or extensive copying*. It is actually a very basic technique I learnt in my first term studying CS. Here goes, abstracting from the database stuff:
def empty[K, V](k: K): Option[V] = None

def add[K, V](m: K => Option[V])(k: K, v: V): K => Option[V] = q => {
  if (k == q) {
    Some(v)
  } else {
    m(q)
  }
}

def build[K, V](input: TraversableOnce[(K, V)]): K => Option[V] = {
  input.foldLeft(empty[K, V] _)((m, i) => add(m)(i._1, i._2))
}
Usage example:
val map = build(List(("a",1),("b",2)))
println("a " + map("a"))
println("b " + map("b"))
println("c " + map("c"))
> a Some(1)
> b Some(2)
> c None
Of course, the resulting function does not have type Map (nor any of its benefits) and has linear lookup costs. I guess you could implement something in a similar way that mimics simple search trees (a rough sketch follows the footnote below).
(*) I am talking concepts here. In reality, things like value sharing might enable e.g. mutable list constructions without memory overhead.
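As for the search-tree remark, it could look something like this rough sketch (all names here are mine, the tree is unbalanced, and worst-case lookup is still linear):

sealed trait Tree[K, V]
case class Leaf[K, V]() extends Tree[K, V]
case class Node[K, V](left: Tree[K, V], key: K, value: V, right: Tree[K, V]) extends Tree[K, V]

def insert[K, V](t: Tree[K, V], k: K, v: V)(implicit ord: Ordering[K]): Tree[K, V] = t match {
  case Leaf() => Node(Leaf(), k, v, Leaf())
  case Node(l, nk, nv, r) =>
    if (ord.lt(k, nk)) Node(insert(l, k, v), nk, nv, r)
    else if (ord.gt(k, nk)) Node(l, nk, nv, insert(r, k, v))
    else Node(l, k, v, r) // same key: replace the value
}

def lookup[K, V](t: Tree[K, V], k: K)(implicit ord: Ordering[K]): Option[V] = t match {
  case Leaf() => None
  case Node(l, nk, nv, r) =>
    if (ord.lt(k, nk)) lookup(l, k)
    else if (ord.gt(k, nk)) lookup(r, k)
    else Some(nv)
}

val tree = List(("a", 1), ("b", 2)).foldLeft(Leaf[String, Int](): Tree[String, Int]) {
  case (t, (k, v)) => insert(t, k, v)
}
lookup(tree, "a") // Some(1)
lookup(tree, "c") // None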