How transactions work in case of failure - Scala

In the method below, how could I roll back all the inserts when one fails?
Am I able to write all the insert statements in a single transaction?
def save(types: List[admin]): Unit = {
  try {
    DB.withTransaction { implicit c =>
      val update = SQL("INSERT IGNORE INTO table1 (user_id, full_name, user_name) values ({user_id},{full_name},{user_name})")
      val batch = (update.asBatch /: types)(
        (sql, _type) => sql.addBatchParams(_type.user.id, _type.user.name, _type.user.login_id))
      batch.execute
    }
    DB.withTransaction { implicit c =>
      val update1 = SQL("INSERT IGNORE INTO table2 (user_id, role_id, is_enabled) values ({user_id},{role_id},{is_enabled})")
      val batch1 = (update1.asBatch /: types)(
        (sql, _type) => sql.addBatchParams(_type.user.id, _type.role_id, 1))
      batch1.execute
    }
    DB.withTransaction { implicit c =>
      val update2 = SQL("INSERT IGNORE INTO table3 (user_id, bu_id, role_id, is_enabled) values ({user_id},{bu_id},{role_id},{is_enabled})")
      val batch2 = (update2.asBatch /: types)(
        (sql, _type) => sql.addBatchParams(_type.user.id, 1, _type.role_id, 1))
      batch2.execute
    }
  } catch {
    case ex: Exception =>
      Logger.error("Exception : " + ex.getMessage)
      play.Logger.info("Exception" + ex.getMessage)
  }
}
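One way to get exactly that behavior is to run all three batches inside a single DB.withTransaction block: everything then happens on one connection, and an exception from any statement rolls the whole transaction back. A minimal sketch, reusing the code above:

def save(types: List[admin]): Unit = {
  try {
    DB.withTransaction { implicit c =>
      // One connection, one transaction: an exception from any execute below
      // rolls back all three batches.
      val batch1 = (SQL("INSERT IGNORE INTO table1 (user_id, full_name, user_name) values ({user_id},{full_name},{user_name})").asBatch /: types)(
        (sql, t) => sql.addBatchParams(t.user.id, t.user.name, t.user.login_id))
      batch1.execute

      val batch2 = (SQL("INSERT IGNORE INTO table2 (user_id, role_id, is_enabled) values ({user_id},{role_id},{is_enabled})").asBatch /: types)(
        (sql, t) => sql.addBatchParams(t.user.id, t.role_id, 1))
      batch2.execute

      val batch3 = (SQL("INSERT IGNORE INTO table3 (user_id, bu_id, role_id, is_enabled) values ({user_id},{bu_id},{role_id},{is_enabled})").asBatch /: types)(
        (sql, t) => sql.addBatchParams(t.user.id, 1, t.role_id, 1))
      batch3.execute
    }
  } catch {
    case ex: Exception =>
      Logger.error("Exception : " + ex.getMessage)
  }
}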


scala load config Map of Map

I need to read from a config file and map the config to a case class. It works fine if I have one table, as below.
CONFIG
mapping {
  target {
    oracle = {
      type = "oracle"
      schema = "orcl"
      tableName = "my_table"
      query = "select key from my_table where dob='2020-01-01'"
    }
  }
}
SCALA CODE SNIPPET
val targetConfig: Map[String, QueryEngine] = config.getObject("mapping.target")
  .entrySet()
  .asScala
  .foldLeft(Map.empty[String, QueryEngine]) { case (acc, entry) =>
    val target = entry.getKey
    val targetConfig = entry.getValue match {
      case validElement if validElement.valueType() == ConfigValueType.OBJECT => validElement.asInstanceOf[ConfigObject].toConfig
      case invalidElement => sys.error(s"illegal syntax at $invalidElement")
    }
    targetConfig.getString("type") match {
      case "oracle" => acc + (target -> new OracleQueryEngine(vars, target, targetConfig.getString("schema"), targetConfig.getString("tableName"), targetConfig.getString("query"), targetConfig.getString("param")))
      case x => sys.error(s"unknown target $x not defined with $targetConfig")
    }
  }
Now I updated the config with multiple tables in the target mapping:
mapping {
  target {
    oracle =
      emp = {
        type = "oracle"
        schema = "orcl"
        tableName = "emp"
        query = "select key from emp where dob='2020-01-01'"
      }
      dept = {
        type = "oracle"
        schema = "orcl"
        tableName = "dept"
        query = "select key from dept where dob='2020-01-01'"
      }
  }
}
CODE SNIPPET for the multiple-table scenario (below) gives this error:
Error:(22, 28) type mismatch;
found : scala.collection.mutable.Buffer[scala.collection.immutable.Map[String,QueryEngine]]
required: scala.collection.immutable.Map[String,QueryEngine]
objs.asScala.map { obj =>
CODE SNIPPET
val sourcesConfig: Map[String, QueryEngine] =
  config.getObject("mapping.target")
    .entrySet()
    .asScala
    .foldLeft(Map.empty[String, QueryEngine]) { case (acc, entry) =>
      val source = entry.getKey
      entry.getValue match {
        case objs: ConfigList =>
          objs.asScala.map { obj =>
            val sourceConfig = obj.asInstanceOf[ConfigObject].toConfig
            sourceConfig.getString("type") match {
              case "oracle" => acc + (source -> new OracleQueryEngine(vars, source, sourceConfig.getString("schema"), sourceConfig.getString("tableName"), sourceConfig.getString("query"), sourceConfig.getString("param")))
              case x => sys.error(s"unknown source not defined $source with $sourceConfig")
            }
          }
      }
    }
Wrap "oracle" in curly brackets as it is a JsObject:
mapping {
  target {
    oracle = {
      emp = {
        type = "oracle"
        schema = "orcl"
        tableName = "emp"
        query = "select key from emp where dob='2020-01-01'"
      }
      dept = {
        type = "oracle"
        schema = "orcl"
        tableName = "dept"
        query = "select key from dept where dob='2020-01-01'"
      }
    }
  }
}
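With the braces in place, each entry under mapping.target.oracle is a config object rather than a ConfigList, so the fold can reuse the original single-table logic. A sketch along the lines of the snippet above (same imports, and the vars and OracleQueryEngine constructor from the question assumed):

val targetConfig: Map[String, QueryEngine] =
  config.getObject("mapping.target.oracle")
    .entrySet()
    .asScala
    .foldLeft(Map.empty[String, QueryEngine]) { case (acc, entry) =>
      val table = entry.getKey
      // each inner entry (emp, dept, ...) is itself an object
      val tableConfig = entry.getValue.asInstanceOf[ConfigObject].toConfig
      tableConfig.getString("type") match {
        case "oracle" => acc + (table -> new OracleQueryEngine(vars, table,
          tableConfig.getString("schema"), tableConfig.getString("tableName"),
          tableConfig.getString("query"), tableConfig.getString("param")))
        case x => sys.error(s"unknown type $x for table $table")
      }
    }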
Does it work now?
If not, consider using PureConfig, for example.
Your solution could be as simple as this:
import pureconfig._
import pureconfig.generic.auto._
case class Mappings(mapping: Mapping)
case class Mapping(target: Oracle)
case class Oracle(oracle: Map[String, Map[String, String]])
ConfigSource.default.load[Mappings]
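load returns an Either, so a hypothetical usage might look like:

ConfigSource.default.load[Mappings] match {
  case Right(m) =>
    // m.mapping.target.oracle is the Map of Maps, e.g. "emp" -> Map("type" -> "oracle", ...)
    m.mapping.target.oracle.foreach { case (table, fields) => println(s"$table -> $fields") }
  case Left(failures) =>
    sys.error(failures.prettyPrint())
}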

how to insert user defined type in cassandra by using lagom scala framework

I am using the Lagom (Scala) framework and I could not find any way to save a Scala case class object that has a complex type in Cassandra. So how do I insert a Cassandra UDT in Lagom Scala, and can anyone explain how to use the BoundStatement.setUDTValue() method?
I have tried to do it using com.datastax.driver.mapping.annotations.UDT,
but it does not work for me. I have also tried the com.datastax.driver.core
Session interface, but again it does not.
case class LeadProperties(
  name: String,
  label: String,
  description: String,
  groupName: String,
  fieldDataType: String,
  options: Seq[OptionalData]
)

object LeadProperties {
  implicit val format: Format[LeadProperties] = Json.format[LeadProperties]
}

@UDT(keyspace = "leadpropertieskeyspace", name = "optiontabletype")
case class OptionalData(label: String)

object OptionalData {
  implicit val format: Format[OptionalData] = Json.format[OptionalData]
}
My queries:
val optiontabletype = """
  |CREATE TYPE IF NOT EXISTS optiontabletype(
  |value text
  |);
""".stripMargin

val createLeadPropertiesTable: String = """
  |CREATE TABLE IF NOT EXISTS leadpropertiestable(
  |name text PRIMARY KEY,
  |label text,
  |description text,
  |groupname text,
  |fielddatatype text,
  |options list<frozen<optiontabletype>>
  |);
""".stripMargin
def createLeadProperties(obj: LeadProperties): Future[List[BoundStatement]] = {
  val bindCreateLeadProperties: BoundStatement = createLeadProperties.bind()
  bindCreateLeadProperties.setString("name", obj.name)
  bindCreateLeadProperties.setString("label", obj.label)
  bindCreateLeadProperties.setString("description", obj.description)
  bindCreateLeadProperties.setString("groupname", obj.groupName)
  bindCreateLeadProperties.setString("fielddatatype", obj.fieldDataType)
  // Here is the problem: I am not finding any method for the Cassandra UDT.
  Future.successful(List(bindCreateLeadProperties))
}
override def buildHandler(): ReadSideProcessor.ReadSideHandler[PropertiesEvent] = {
  readSide.builder[PropertiesEvent]("PropertiesOffset")
    .setGlobalPrepare(() => PropertiesRepository.createTable)
    .setPrepare(_ => PropertiesRepository.prepareStatements)
    .setEventHandler[PropertiesCreated](ese ⇒
      PropertiesRepository.createLeadProperties(ese.event.obj))
    .build()
}
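On the BoundStatement.setUDTValue() part of the question: with the DataStax Java driver 3.x, the usual pattern is to look the UserType up in the cluster metadata, build a UDTValue from it, and bind it. A sketch, assuming access to the underlying driver Session (in Lagom's scaladsl, CassandraSession.underlying() returns a Future[Session]) and using the names from the question:

import com.datastax.driver.core.{BoundStatement, Session, UDTValue, UserType}
import scala.collection.JavaConverters._

// Sketch: keyspace, type and column names are the ones from the question.
def bindOptions(session: Session, stmt: BoundStatement, options: Seq[OptionalData]): BoundStatement = {
  val userType: UserType = session.getCluster.getMetadata
    .getKeyspace("leadpropertieskeyspace")
    .getUserType("optiontabletype")
  val udtValues = options.map(o => userType.newValue().setString("value", o.label))
  // setUDTValue is for a single UDT column; for a list<frozen<optiontabletype>> column use setList:
  stmt.setList("options", udtValues.asJava)
}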
I faced the same issue and solved it the following way.
Define the type and table:
def createTable(): Future[Done] = {
  session.executeCreateTable("CREATE TYPE IF NOT EXISTS optiontabletype(field1 text, field2 text)")
    .flatMap(_ => session.executeCreateTable(
      "CREATE TABLE IF NOT EXISTS leadpropertiestable ( " +
        "id TEXT, options list<frozen <optiontabletype>>, PRIMARY KEY (id))"
    ))
}
Call this method in buildHandler() like this:
override def buildHandler(): ReadSideProcessor.ReadSideHandler[PropertiesEvent] =
  readSide.builder[PropertiesEvent]("PropertiesOffset")
    .setPrepare(_ => prepare())
    .setGlobalPrepare(() => createTable())
    .setEventHandler[PropertiesCreated](processPropertiesCreated)
    .build()
Then in processPropertiesCreated() I used it like this:
private val writePromise = Promise[PreparedStatement] // initialized in prepare
private def writeF: Future[PreparedStatement] = writePromise.future

private def processPropertiesCreated(eventElement: EventStreamElement[PropertiesCreated]): Future[List[BoundStatement]] = {
  writeF.map { ps =>
    val userType = ps.getVariables.getType("options").getTypeArguments.get(0).asInstanceOf[UserType]
    val newValue = userType.newValue().setString("field1", "1").setString("field2", "2")
    val bindWriteTitle = ps.bind()
    bindWriteTitle.setString("id", eventElement.event.id)
    bindWriteTitle.setList("options", eventElement.event.keys.map(_ => newValue).toList.asJava) // TODO: needs a real conversion, currently only a stub
    List(bindWriteTitle)
  }
}
And read it like this:
def toFacility(r: Row): LeadPropertiesTable = {
  LeadPropertiesTable(
    id = r.getString(fId),
    options = r.getList("options", classOf[UDTValue]).asScala
      .map(udt => OptiontableType(field1 = udt.getString("field1"), field2 = udt.getString("field2")))
  )
}
My prepare() function:
private def prepare(): Future[Done] = {
  val f = session.prepare("INSERT INTO leadpropertiestable (id, options) VALUES (?, ?)")
  writePromise.completeWith(f)
  f.map(_ => Done)
}
This is not very well written code, but I think it will help you to proceed.

Slick3 with SQLite - autocommit seems to not be working

I'm trying to write some basic queries with Slick for a SQLite database.
Here is my code:
class MigrationLog(name: String) {
  val migrationEvents = TableQuery[MigrationEventTable]

  lazy val db: Future[SQLiteDriver.backend.DatabaseDef] = {
    val db = Database.forURL(s"jdbc:sqlite:$name.db", driver = "org.sqlite.JDBC")
    val setup = DBIO.seq(migrationEvents.schema.create)
    val createFuture = for {
      tables <- db.run(MTable.getTables)
      createResult <- if (tables.length == 0) db.run(setup) else Future.successful(())
    } yield createResult
    createFuture.map(_ => db)
  }

  val addEvent: (String, String) => Future[String] = (aggregateId, eventType) => {
    val id = java.util.UUID.randomUUID().toString
    val command = DBIO.seq(migrationEvents += (id, aggregateId, None, eventType, "CREATED", System.currentTimeMillis, None))
    db.flatMap(_.run(command).map(_ => id))
  }

  val eventSubmitted: (String, String) => Future[Unit] = (id, batchId) => {
    val q = for { e <- migrationEvents if e.id === id } yield (e.batchId, e.status, e.updatedAt)
    val updateAction = q.update(Some(batchId), "SUBMITTED", Some(System.currentTimeMillis))
    db.map(_.run(updateAction))
  }

  val eventMigrationCompleted: (String, String, String) => Future[Unit] = (batchId, id, status) => {
    val q = for { e <- migrationEvents if e.batchId === batchId && e.id === id } yield (e.status, e.updatedAt)
    val updateAction = q.update(status, Some(System.currentTimeMillis))
    db.map(_.run(updateAction))
  }

  val allEvents = () => {
    db.flatMap(_.run(migrationEvents.result))
  }
}
Here is how I'm using it:
val migrationLog = new MigrationLog("test")

val events = for {
  id <- migrationLog.addEvent("aggregateUserId", "userAccessControl")
  _ <- migrationLog.eventSubmitted(id, "batchID_generated_from_idam")
  _ <- migrationLog.eventMigrationCompleted("batchID_generated_from_idam", id, "Successful")
  events <- migrationLog.allEvents()
} yield events

events.map(_.foreach {
  case (id, aggregateId, batchId, eventType, status, submitted, updatedAt) =>
    println(s"$id $aggregateId $batchId $eventType $status $submitted $updatedAt")
})
The idea is to add the event first, then update it with the batchId (which also updates the status), and then update the status when the job is done. events should contain events with status Successful.
What happens is that after running this code it prints events with status SUBMITTED. If I wait a while and run the same allEvents query, or check the db from the command line using sqlite3, then it's updated correctly.
I'm properly waiting for futures to be resolved before starting the next operation, and auto-commit should be enabled by default.
Am I missing something?
Turns out the problem was with db.map(_.run(updateAction)), which returns Future[Future[Int]], which means that the command had not finished by the time I tried to run the next query.
Replacing it with db.flatMap(_.run(updateAction)) solved the issue.
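Applied to one of the methods above, the fixed version looks like this:

val eventSubmitted: (String, String) => Future[Unit] = (id, batchId) => {
  val q = for { e <- migrationEvents if e.id === id } yield (e.batchId, e.status, e.updatedAt)
  val updateAction = q.update(Some(batchId), "SUBMITTED", Some(System.currentTimeMillis))
  // flatMap flattens Future[Future[Int]] into Future[Int], so this future
  // completes only once the update has actually run; map(_ => ()) gives back Unit.
  db.flatMap(_.run(updateAction)).map(_ => ())
}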

Insert if not exists in Slick 3.0.0

I'm trying to insert if not exists; I found this post for 1.0.1 and 2.0.
I found a snippet using transactionally in the docs of 3.0.0:
val a = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

val f: Future[Unit] = db.run(a)
I'm struggling to write the insert-if-not-exists logic with this structure. I'm new to Slick and have little experience with Scala. This is my attempt to do insert if not exists outside a transaction:
val result: Future[Boolean] = db.run(products.filter(_.name === "foo").exists.result)
result.map { exists =>
  if (!exists) {
    products += Product(
      None,
      productName,
      productPrice
    )
  }
}
But how do I put this in a transactionally block? This is the furthest I can get:

val a = (for {
  exists <- products.filter(_.name === "foo").exists.result
  // ???
  // _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally
Thanks in advance
It is possible to use a single insert ... if not exists query. This avoids multiple database round-trips and race conditions (transactions may not be enough, depending on the isolation level).
def insertIfNotExists(name: String) = users.forceInsertQuery {
  val exists = (for (u <- users if u.name === name.bind) yield u).exists
  val insert = (name.bind, None) <> (User.apply _ tupled, User.unapply)
  for (u <- Query(insert) if !exists) yield u
}
Await.result(db.run(DBIO.seq(
  // create the schema
  users.schema.create,
  users += User("Bob"),
  users += User("Bob"),
  insertIfNotExists("Bob"),
  insertIfNotExists("Fred"),
  insertIfNotExists("Fred"),
  // print the users (select * from USERS)
  users.result.map(println)
)), Duration.Inf)
Output:
Vector(User(Bob,Some(1)), User(Bob,Some(2)), User(Fred,Some(3)))
Generated SQL:
insert into "USERS" ("NAME","ID") select ?, null where not exists(select x2."NAME", x2."ID" from "USERS" x2 where x2."NAME" = ?)
Here's the full example on github
This is the version I came up with:

val a = (
  products.filter(_.name === "foo").exists.result.flatMap { exists =>
    if (!exists) {
      products += Product(
        None,
        productName,
        productPrice
      )
    } else {
      DBIO.successful(None) // no-op
    }
  }
).transactionally
It is a bit lacking though; for example, it would be useful to return the inserted or existing object.
For completeness, here is the table definition:
case class DBProduct(id: Int, uuid: String, name: String, price: BigDecimal)

class Products(tag: Tag) extends Table[DBProduct](tag, "product") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc) // This is the primary key column
  def uuid = column[String]("uuid")
  def name = column[String]("name")
  def price = column[BigDecimal]("price", O.SqlType("decimal(10, 4)"))
  def * = (id, uuid, name, price) <> (DBProduct.tupled, DBProduct.unapply)
}

val products = TableQuery[Products]
I'm using a mapped table; the solution also works for tuples, with minor changes.
Note also that it's not necessary to define the id as optional; according to the documentation it's ignored in insert operations:
When you include an AutoInc column in an insert operation, it is silently ignored, so that the database can generate the proper value
And here is the method:
def insertIfNotExists(productInput: ProductInput): Future[DBProduct] = {
  val productAction = (
    products.filter(_.uuid === productInput.uuid).result.headOption.flatMap {
      case Some(product) =>
        mylog("product was there: " + product)
        DBIO.successful(product)
      case None =>
        mylog("inserting product")
        val productId =
          (products returning products.map(_.id)) += DBProduct(
            0,
            productInput.uuid,
            productInput.name,
            productInput.price
          )
        val product = productId.map { id =>
          DBProduct(
            id,
            productInput.uuid,
            productInput.name,
            productInput.price
          )
        }
        product
    }
  ).transactionally
  db.run(productAction)
}
(Thanks to Matthew Pocock from the Google group thread for orienting me to this solution.)
I've run into a solution that looks more complete. Section 3.1.7, More Control over Inserts, of the Essential Slick book has the example.
At the end you get something like:
val entity = UserEntity(UUID.randomUUID(), "jay", "jay@localhost")
val exists =
  users
    .filter(u =>
      u.name === entity.name.bind
        && u.email === entity.email.bind
    )
    .exists
val selectExpression = Query(
  (
    entity.id.bind,
    entity.name.bind,
    entity.email.bind
  )
).filterNot(_ => exists)
val action = users
  .map(u => (u.id, u.name, u.email))
  .forceInsertQuery(selectExpression)
exec(action)
// res17: Int = 1
exec(action)
// res18: Int = 0
According to the Slick 3.0 manual's insert query section (http://slick.typesafe.com/doc/3.0.0/queries.html), the inserted value can be returned with its id as below:
def insertIfNotExists(productInput: ProductInput): Future[DBProduct] = {
  val productAction = (
    products.filter(_.uuid === productInput.uuid).result.headOption.flatMap {
      case Some(product) =>
        mylog("product was there: " + product)
        DBIO.successful(product)
      case None =>
        mylog("inserting product")
        (products returning products.map(_.id)
          into ((prod, id) => prod.copy(id = id))) += DBProduct(
          0,
          productInput.uuid,
          productInput.name,
          productInput.price
        )
    }
  ).transactionally
  db.run(productAction)
}

spark join operation based on two columns

I'm trying to join two datasets based on two columns. It works when I join on one column but fails with the error below:
:29: error: value join is not a member of org.apache.spark.rdd.RDD[(String, String, (String, String, String, String, Double))]
val finalFact = fact.join(dimensionWithSK).map { case(nk1,nk2, ((parts1,parts2,parts3,parts4,amount), (sk, prop1,prop2,prop3,prop4))) => (sk,amount) }
Code:
import org.apache.spark.rdd.RDD

def zipWithIndex[T](rdd: RDD[T]) = {
  val partitionSizes = rdd.mapPartitions(p => Iterator(p.length)).collect
  val ranges = partitionSizes.foldLeft(List((0, 0))) { case (accList, count) =>
    val start = accList.head._2
    val end = start + count
    (start, end) :: accList
  }.reverse.tail.toArray

  rdd.mapPartitionsWithIndex((index, partition) => {
    val start = ranges(index)._1
    val end = ranges(index)._2
    val indexes = Iterator.range(start, end)
    partition.zip(indexes)
  })
}
val dimension = sc.
  textFile("dimension.txt").
  map { line =>
    val parts = line.split("\t")
    (parts(0), parts(1), parts(2), parts(3), parts(4), parts(5))
  }

val dimensionWithSK =
  zipWithIndex(dimension).map { case ((nk1, nk2, prop3, prop4, prop5, prop6), idx) => (nk1, nk2, (prop3, prop4, prop5, prop6, idx + nextSurrogateKey)) }

val fact = sc.
  textFile("fact.txt").
  map { line =>
    val parts = line.split("\t")
    // we need to output (Naturalkey, (FactId, Amount)) in
    // order to be able to join with the dimension data.
    (parts(0), parts(1), (parts(2), parts(3), parts(4), parts(5), parts(6).toDouble))
  }

val finalFact = fact.join(dimensionWithSK).map { case (nk1, nk2, ((parts1, parts2, parts3, parts4, amount), (sk, prop1, prop2, prop3, prop4))) => (sk, amount) }
I'd appreciate someone's help here.
Thanks,
Sridhar
If you look at the signature of join, it works on an RDD of pairs:
def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]
You have a triple. I guess you're trying to join on the first two elements of the tuple, so you need to map your triple to a pair whose first element is itself a pair containing the first two elements of the triple, e.g. for any types V1 and V2:
val left: RDD[(String, String, V1)] = ???  // some rdd
val right: RDD[(String, String, V2)] = ??? // some rdd

left.map {
  case (key1, key2, value) => ((key1, key2), value)
}.join(
  right.map {
    case (key1, key2, value) => ((key1, key2), value)
  })
This will give you an RDD of the form RDD[((String, String), (V1, V2))].
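A tiny runnable check of the idea, with made-up data:

val left = sc.parallelize(Seq(("a", "b", 1.0), ("c", "d", 2.0)))
val right = sc.parallelize(Seq(("a", "b", 42L)))
val joined = left.map { case (k1, k2, v) => ((k1, k2), v) }
  .join(right.map { case (k1, k2, v) => ((k1, k2), v) })
joined.collect.foreach(println) // prints ((a,b),(1.0,42))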
rdd1 schema: field1, field2, field3, fieldX, ...
rdd2 schema: field1, field2, field3, fieldY, ...

val joinResult = rdd1.join(rdd2,
  Seq("field1", "field2", "field3"), "outer")

joinResult schema: field1, field2, field3, fieldX, fieldY, ...

(Note: this overload of join, taking column names and a join type, is the DataFrame API; plain RDDs have no such method.)
val emp = sc.
  textFile("emp.txt").
  map { line =>
    val parts = line.split("\t")
    // key by the two natural-key columns so the RDDs can be joined
    ((parts(0), parts(2)), parts(1))
  }

val emp_new = sc.
  textFile("emp_new.txt").
  map { line =>
    val parts = line.split("\t")
    ((parts(0), parts(2)), parts(1))
  }

val finalemp =
  emp_new.join(emp).
    map { case ((nk1, nk2), (parts1, val1)) => (nk1, parts1, val1) }