Race condition in Slick Code - scala

I have written this Slick DAO and its unit test in specs2.
My code has race conditions. When I run the same tests, I get different outputs.
The race conditions exist even though I call Await.result(future, Duration.Inf) in both functions.
DAO
package com.example
import slick.backend.DatabasePublisher
import slick.driver.H2Driver.api._
import scala.concurrent.ExecutionContext.Implicits.global
import slick.jdbc.meta._
import scala.concurrent._
import scala.concurrent.duration._
case class Person(id: Int, firstname: String, lastname: String)
class People(tag: Tag) extends Table[Person](tag, "PEOPLE") {
def id = column[Int]("PERSON_ID", O.PrimaryKey)
def firstname = column[String]("PERSON_FIRST_NAME")
def lastname = column[String]("PERSON_LAST_NAME")
def * = (id, firstname, lastname) <> (Person.tupled, Person.unapply _)
}
object PersonDAO {
private def createList(numRows: Int) : List[Person] = {
def recFunc(counter: Int, result: List[Person]) : List[Person] = {
counter match {
case x if x <= numRows => recFunc(counter + 1, Person(counter, "test" + counter, "user" + counter) :: result)
case _ => result
}
}
recFunc(1, List[Person]())
}
val db = Database.forConfig("test1")
val people = TableQuery[People]
def createAndPopulate(numRows: Int) = {
val action1 = people.schema.create
val action2 = people ++= Seq(createList(numRows) : _* )
val combined = db.run(action1 andThen action2)
val future1 = combined.map { result =>
result map {x =>
println(s"number of rows inserted $x")
x
}
}
Await.result(future1, Duration.Inf).getOrElse(0)
}
def printAll() = {
val a = people.result
val b = db.run(a)
val y = b map { result =>
result map {x => x}
}
val z = Await.result(y, Duration.Inf)
println(z)
println(z.length)
z
}
}
Unit Test
import org.specs2.mutable._
import com.example._
class HelloSpec extends Specification {
"This usecase " should {
"should insert rows " in {
val x = PersonDAO.createAndPopulate(100)
x === 100
}
}
"This usecase " should {
"return 100 rows" in {
val x = PersonDAO.printAll()
val y = PersonDAO.printAll()
y.length === 100
}
}
}
When I run this same code using activator test, I see two different kinds of output on different runs.
Sometimes the code throws an exception:
number of rows inserted 100
[info] HelloSpec
[info]
[info] This usecase should
[info] + should insert rows
[info]
[info] This usecase should
[info] ! return 100 rows
[error] JdbcSQLException: : Table PEOPLE not found; SQL statement:
[error] select x2."PERSON_ID", x2."PERSON_FIRST_NAME", x2."PERSON_LAST_NAME" from "PEOPLE" x2 [42S02-60] (Message.java:84)
[error] org.h2.message.Message.getSQLException(Message.java:84)
[error] org.h2.message.Message.getSQLException(Message.java:88)
[error] org.h2.message.Message.getSQLException(Message.java:66)
Sometimes the first function call returns 0 rows and the second function call returns 100 rows:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
number of rows inserted 100
Vector()
0
Vector(Person(100,test100,user100), Person(99,test99,user99), Person(98,test98,user98), Person(97,test97,user97), Person(96,test96,user96), Person(95,test95,user95), Person(94,test94,user94), Person(93,test93,user93), Person(92,test92,user92), Person(91,test91,user91), Person(90,test90,user90), Person(89,test89,user89), Person(88,test88,user88), Person(87,test87,user87), Person
I don't understand why my code has these race conditions, because I block on the future in each method.

Your assumption that the two test cases run serially, one after the other, is not right. The test cases run in parallel. Just add sequential to verify that that's the case.
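For example, a minimal sketch of the spec above with sequential added (keeping the original assertions):
import org.specs2.mutable._
import com.example._
class HelloSpec extends Specification {
  sequential // run the examples one after the other instead of concurrently
  "This usecase " should {
    "insert rows" in {
      PersonDAO.createAndPopulate(100) === 100
    }
    "return 100 rows" in {
      PersonDAO.printAll().length === 100
    }
  }
}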

Related

Apache Flink (Scala) Rate-Balanced Source Function

In advance, I know this question is a little long, but I did my best to simplify it and make it approachable. Please provide feedback and I will try to act on it.
I am new to Flink, and I am trying to use it for a pipeline that will continuously update each item of a large (TBs) dataset. I wish to update some higher-priority items more often, but I want to update all items as fast as possible.
Would it be possible to have a different source for each priority tier (e.g. high, medium, low) to start the update process, and read from the higher-priority sources more often? The other approach I've thought of is a custom SourceFunction that has a reader for each file and emits according to the rates I set. The first approach didn't seem feasible, so I am trying the second but am stuck.
This is what I've tried so far:
import scala.collection.mutable
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.hadoop.fs.{FileSystem, Path}
import java.util.concurrent.locks.ReadWriteLock
import java.util.concurrent.locks.ReentrantReadWriteLock
import java.util.concurrent.atomic.AtomicBoolean
/** Implements logic to ensure higher-priority values are pushed more often.
*
* The relative priority is the relative run frequency. For example, if tier C
* is half the priority of tier B, then C is pushed half as often.
*
* Type parameters:
* - T: tier type (e.g. an enum)
* - V: emitted value type (e.g. ItemToUpdate)
*
* How to read objects from CSV rows:
* - https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/formats/csv/
*
* How to read objects from Parquet:
* - https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/formats/parquet/
*/
abstract class TierPriorityEmitter[T, V](
val emitRates: mutable.Map[T, Float],
val updateCounts: mutable.Map[T, Long],
val iterators: mutable.Map[T, Iterator[V]]
) extends SourceFunction[V] {
@transient
private val running = new AtomicBoolean(false)
def this(
emitRates: Map[T, Float]
) {
this(
mutable.Map() ++= emitRates,
mutable.Map() ++= (for ((k, v) <- emitRates) yield (k, 0L)).toMap,
mutable.Map.empty[T, Iterator[V]]
)
for ((k, v) <- emitRates) {
val it = getTierIterator(k)
if (it.hasNext) {
iterators.put(k, it)
} else {
updateCounts.remove(k)
this.emitRates.remove(k)
}
}
// rescale rates in case any dropped or priorities given instead
val s = this.emitRates.values.sum // total weight, used to normalize the rates below
for ((k, v) <- this.emitRates) {
this.emitRates.put(k, v / s)
}
}
/** Return iterator for reading a specific priority tier from source file.
* Called again every pass over each priority tier.
*
* For example, may return an iterator to a specific file or an iterator that
* filters to the specified tier. Must avoid storing the entire file in memory
* as it is prohibitively large. TODO FIXME how to read?
*/
def getTierIterator(tier: T): Iterator[V] = ???
/** Determine which class of road should be emitted next.
*/
def nextToEmit(): T = {
/*
```python
total_updates = sum(updateCounts.values())
if total_updates == 0:
return max(emitRates.keys(), key=lambda k: emitRates[k])
actualEmitRates = {k: v/total_updates for k, v in updateCounts}
return min(emitRates.keys(), key=lambda k: actualEmitRates[k] - emitRates[k])
```
*/
val totalUpdates = updateCounts.foldLeft(0.0)(_ + _._2) // sum
if (totalUpdates == 0) {
return emitRates.maxBy(_._2)._1
}
val actualEmitRates =
(for ((k, v) <- updateCounts) yield (k, v / totalUpdates)).toMap
return emitRates.minBy(t => actualEmitRates(t._1) - t._2)._1
}
/** Emit to trigger updates.
*
* @param ctx
*/
override def run(ctx: SourceFunction.SourceContext[V]): Unit = {
running.set(true)
while (running.get()) {
val tier = nextToEmit()
val it = iterators(tier)
// zero length iterators are removed in the constructor
ctx.collect(it.next())
updateCounts.put(tier, updateCounts(tier) + 1)
if (!it.hasNext) {
iterators.put(tier, getTierIterator(tier))
}
}
}
override def cancel(): Unit = {
running.set(false)
}
}
I have this unit test.
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers
import org.scalatest.BeforeAndAfter
import org.apache.flink.api.common.time.Time
import org.apache.flink.streaming.util.SourceOperatorTestHarness
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.common.typeinfo.TypeHint
import org.apache.flink.streaming.api.watermark.Watermark
import org.apache.flink.streaming.api.operators.SourceOperatorFactory
import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.runtime.operators.testutils.MockEnvironmentBuilder
import org.apache.flink.api.connector.source.Source
import org.apache.flink.api.connector.source.SourceReader
import scala.collection.mutable.Queue
import org.apache.flink.test.util.MiniClusterWithClientResource
import org.apache.flink.test.util.MiniClusterResourceConfiguration
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
/** Class for testing abstract TierPriorityEmitter
*/
class StringPriorityEmitter(priorities: Map[Char, Float]) extends TierPriorityEmitter[Char, String](priorities) with Serializable {
override def getTierIterator(tier: Char): Iterator[String] = {
return (for (i <- (1 to 9)) yield tier + i.toString()).iterator
}
}
class TestTierPriorityEmitter
extends AnyFlatSpec
with Matchers with BeforeAndAfter {
val flinkCluster = new MiniClusterWithClientResource(
new MiniClusterResourceConfiguration.Builder()
.setNumberSlotsPerTaskManager(1)
.setNumberTaskManagers(1)
.build
)
before {
flinkCluster.before()
}
after {
flinkCluster.after()
}
"TierPriorityEmitter" should "emit according to priority" in {
// be sure emit rates are floats and sum to 1
val emitter: TierPriorityEmitter[Char, String] = new StringPriorityEmitter(Map('A' -> 3f, 'B' -> 2f, 'C' -> 1f))
// should set emit rates based on our priorities
(emitter.emitRates('A') > emitter.emitRates('B') && emitter.emitRates('B') > emitter.emitRates('C') && emitter.emitRates('C') > 0) shouldBe(true)
(emitter.emitRates('A') + emitter.emitRates('B') + emitter.emitRates('C')) shouldEqual(1.0)
(emitter.emitRates('A') / emitter.emitRates('C')) shouldEqual(3.0)
// should output according to the assigned rates
// TODO use a SourceOperatorTestHarness instead. I couldn't get one to work.
val res = new Queue[String]()
for (_ <- 1 to 15) {
val tier = emitter.nextToEmit()
val it = emitter.iterators(tier)
// zero length iterators are removed in the constructor
res.enqueue(it.next())
emitter.updateCounts.put(tier, emitter.updateCounts(tier) + 1)
if (!it.hasNext) {
emitter.iterators.put(tier, emitter.getTierIterator(tier))
}
}
res shouldEqual(Queue("A1", "B1", "C1", "A2", "B2", "A3", "B3", "A4", "C2", "A5", "B4", "A6", "B5", "A7", "C3"))
}
"TierPriorityEmitter" should "be a source" in {
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1) // FIXME not enough resources for 2
val emitter: TierPriorityEmitter[Char, String] = new StringPriorityEmitter(Map('A' -> 3, 'B' -> 2, 'C' -> 1))
// FIXME not serializable
val source = env.addSource(
emitter
)
source.returns(TypeInformation.of(new TypeHint[String]() {}))
val sink = new TestSink[String]("Strings")
source.addSink(sink)
// Execute program, beginning computation.
env.execute("Test TierPriorityEmitter")
sink.getResults should contain allOf ("A1", "B1", "C1", "A2", "B2", "A3", "B3", "A4", "C2", "A5", "B4", "A6", "B5", "A7", "C3")
}
}
There are a couple of obvious problems that I can't figure out how to resolve:
How to read the input files to implement the TierPriorityEmitter as a working (serializable) SourceFunction? (A rough sketch of the serialization side follows below.)
I know that Iterators probably aren't the right class to use, but I am not sure what to try next. I am not very experienced with Scala.
Optimization advice?
Although the locking introduces synchronization, rate balancing should still result in the best downstream performance, since the later steps are slower. Any advice on how to improve this would be appreciated.
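On the serializability question, one common pattern (a rough sketch under my own assumptions, not a complete answer: the RichSourceFunction base class, the @transient fields, and the open() initialization are my additions) is to keep only plain, serializable configuration in constructor fields and rebuild the per-tier readers on the task manager:
import java.util.concurrent.atomic.AtomicBoolean
import scala.collection.mutable
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.source.{RichSourceFunction, SourceFunction}
// Only the (serializable) emit rates travel with the job graph; the iterators,
// counters, and running flag are rebuilt per task instance in open().
abstract class LazyTierEmitter[T, V](val emitRates: Map[T, Float]) extends RichSourceFunction[V] {
  @transient private var iterators: mutable.Map[T, Iterator[V]] = _
  @transient private var updateCounts: mutable.Map[T, Long] = _
  @transient private var running: AtomicBoolean = _
  /** Open (or reopen) a reader for one tier; must stream rather than load the whole file. */
  def getTierIterator(tier: T): Iterator[V]
  override def open(parameters: Configuration): Unit = {
    running = new AtomicBoolean(true)
    iterators = mutable.Map.empty[T, Iterator[V]]
    updateCounts = mutable.Map.empty[T, Long]
    emitRates.keys.foreach { t =>
      iterators(t) = getTierIterator(t) // tiers with no data should be dropped up front
      updateCounts(t) = 0L
    }
  }
  /** Pick the tier whose observed emit rate lags its target rate the most. */
  private def nextToEmit(): T = {
    val total = updateCounts.values.sum.toDouble
    if (total == 0) emitRates.maxBy(_._2)._1
    else emitRates.minBy { case (t, target) => updateCounts(t) / total - target }._1
  }
  override def run(ctx: SourceFunction.SourceContext[V]): Unit =
    while (running.get()) {
      val tier = nextToEmit()
      val it = iterators(tier)
      if (it.hasNext) {
        ctx.collect(it.next())
        updateCounts(tier) += 1
      }
      if (!it.hasNext) iterators(tier) = getTierIterator(tier) // start a fresh pass over the tier
    }
  override def cancel(): Unit = if (running != null) running.set(false)
}
The rate-balancing logic mirrors the nextToEmit above; only the lifecycle (constructor vs. open()) differs, which is what keeps the function serializable.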

Chisel Passing Enum type as IO

This is a related topic on Chisel enums; I have already looked at chisel "Enum(UInt(), 5)" failed.
I am building a RISC-V core in Chisel and am running into a roadblock. I would like to abstract the ALU opcode from a combination of opcode, funct3, and funct7 to the actual ALU operation. Below is a SystemVerilog module that shows the kind of behavior I would like to emulate:
// Defined in a types file
typedef enum bit [2:0] {
alu_add = 3'b000,
alu_sll = 3'b001,
alu_sra = 3'b010,
alu_sub = 3'b011,
alu_xor = 3'b100,
alu_srl = 3'b101,
alu_or = 3'b110,
alu_and = 3'b111
} alu_ops;
module alu
(
input alu_ops aluop,
input [31:0] a, b,
output logic [31:0] f
);
always_comb
begin
unique case (aluop)
alu_add: f = a + b;
alu_sll: f = a << b[4:0];
alu_sra: f = $signed(a) >>> b[4:0];
alu_sub: f = a - b;
alu_xor: f = a ^ b;
alu_srl: f = a >> b[4:0];
alu_or: f = a | b;
alu_and: f = a & b;
endcase
end
endmodule : alu
This is the current chisel file that I am using
AluType.scala
import chisel3._
import chisel3.Driver
import chisel3.experimental.ChiselEnum
package ALUType {
object AluOP extends ChiselEnum {
// type AluOP = Value
val ADDI, SLTI, SLTIU, XORI, ORI, ANDI, SLLI, SLRI, SRAI, ADD, SUB, SLL, SLT, SLTU, XOR, SRL, SRA, OR, AND = Value
}
}
AluFile.scala
import chisel3._
import chisel3.Driver
import chisel3.experimental.ChiselEnum
import ALUType.AluOP._
class ALUFile(val dl_size: Int, val op_size: Int, val funct3_size: Int, val funct7_size: Int) extends Module {
val io = IO(new Bundle {
val val_a = Input(UInt(dl_size.W))
val val_b = Input(UInt(dl_size.W))
val aluop = Input(ALUType.AluOP.Type)
//val opcode = Input(UInt(op_size.W))
//val funct3 = Input(UInt(funct3_size.W))
//val funct7 = Input(UInt(funct7_size.W))
val val_out = Output(UInt(dl_size.W))
})
// Actual function
}
This is the result of the sbt run
$ sbt run
[info] welcome to sbt 1.4.7 (Oracle Corporation Java 1.8.0_281)
[info] loading project definition from C:\Chisel\Test1\RegFile\project
[info] loading settings for project regfile from build.sbt ...
[info] set current project to regfile (in build file:/C:/Chisel/Test1/RegFile/)
[info] compiling 1 Scala source to C:\Chisel\Test1\RegFile\target\scala-2.12\classes ...
[error] C:\Chisel\Test1\RegFile\src\main\scala\ALUFile.scala:43:30: inferred type arguments [ALUType.AluOP.Type.type] do not conform to method apply's type parameter bounds [T <: chisel3.Data]
[error] val aluop = Input(Wire(ALUType.AluOP.Type))
[error] ^
[error] C:\Chisel\Test1\RegFile\src\main\scala\ALUFile.scala:43:49: type mismatch;
[error] found : ALUType.AluOP.Type.type
[error] required: T
[error] val aluop = Input(Wire(ALUType.AluOP.Type))
[error] ^
[error] two errors found
[error] (Compile / compileIncremental) Compilation failed
[error] Total time: 4 s, completed Feb 11, 2021 8:35:18 PM
Do I just need to assign it to a UInt higher up, then decode it again? It seems silly to have to encode and then decode just to pass a value from one module to the next. Is there a way to get AluOP.Type to conform to T? I would have thought that it would, simply because it is a ChiselEnum.
EDIT :: 1
I tried enumerating with UInt, but it says that it is a non-literal type:
[info] running ALUFile
[info] [0.000] Elaborating design...
[error] chisel3.internal.ChiselException: AluOp defined with a non-literal type
[error] ...
[error] at AluOp$.<init>(ALUFile.scala:41)
[error] at AluOp$.<clinit>(ALUFile.scala)
[error] at ALUFile$$anon$1.<init>(ALUFile.scala:49)
[error] at ALUFile.<init>(ALUFile.scala:46)
[error] at ALUFile$.$anonfun$new$6(ALUFile.scala:80)
object AluOp extends ChiselEnum {
val addi, slti, sltiu, xori, ori, andi, slli, slri, srai, add, sub, sll, slt, sltu, xor, srl, sra, or, and = Value(UInt()) // Line where error occurs
}
import AluOp._
class ALUFile(val dl_size: Int, val op_size: Int, val funct3_size: Int, val funct7_size: Int) extends Module {
val io = IO(new Bundle {
val val_a = Input(UInt(dl_size.W))
val val_b = Input(UInt(dl_size.W))
val aluop = Input( AluOp() )
val val_out = Output(UInt(dl_size.W))
})
switch (io.aluop) {
is (addi) {
io.val_out := 1.U
}
is (slti) {
io.val_out := 2.U
}
}
// Output result
io.val_out := 0.U
}
From https://github.com/chipsalliance/chisel3/blob/dd6871b8b3f2619178c2a333d9d6083805d99e16/src/test/scala/chiselTests/StrongEnum.scala
This is the only example they have of using an enum in a switch statement, and they manually map values to the type, which is not an enum:
object StrongEnumFSM {
object State extends ChiselEnum {
val sNone, sOne1, sTwo1s = Value
val correct_annotation_map = Map[String, BigInt]("sNone" -> 0, "sOne1" -> 1, "sTwo1s" -> 2)
}
}
class StrongEnumFSM extends Module {
import StrongEnumFSM.State
import StrongEnumFSM.State._
// This FSM detects two 1's one after the other
val io = IO(new Bundle {
val in = Input(Bool())
val out = Output(Bool())
val state = Output(State())
})
val state = RegInit(sNone)
io.out := (state === sTwo1s)
io.state := state
switch (state) {
is (sNone) {
when (io.in) {
state := sOne1
}
}
is (sOne1) {
when (io.in) {
state := sTwo1s
} .otherwise {
state := sNone
}
}
is (sTwo1s) {
when (!io.in) {
state := sNone
}
}
}
}
One possible solution can be found at https://github.com/chipsalliance/chisel3/issues/885, where the author defined his own Scala object and had the call return a UInt.
Additionally, if I just use Enum, I can get it to compile, which may be the best solution for now. I would like ChiselEnum to be able to easily define states or operations as UInts and to pass them as IO, so that I can get away from using raw numbers to define states and make them much more readable.
object AluOp {
val addi :: slti :: sltiu :: xori :: ori :: andi :: slli :: slri :: srai :: add :: sub :: sll :: slt :: sltu :: xor :: srl :: sra :: or :: and :: Nil = Enum(19)
}
import AluOp._
class ALUFile(val dl_size: Int, val op_size: Int, val funct3_size: Int, val funct7_size: Int) extends Module {
val io = IO(new Bundle {
val val_a = Input(UInt(dl_size.W))
val val_b = Input(UInt(dl_size.W))
// val aluop = Input( UInt(AluOp.getWidth.W) )
val aluop = Input(UInt(5.W))
// val opcode = Input(UInt(op_size.W))
// val funct3 = Input(UInt(funct3_size.W))
// val funct7 = Input(UInt(funct7_size.W))
val val_out = Output(UInt(dl_size.W))
})
// val reg_last = RegNext()
switch (io.aluop) {
is (addi) {
io.val_out := 1.U
}
is (slti) {
io.val_out := 2.U
}
}
I think all you need to do is use
val aluop = Input(AluOP())
Some simple example code can be found in the chisel3 unit tests
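For instance, here is a minimal self-contained sketch of that idea (the module and port names are made up for illustration); note that the default connection comes before the switch, so the switch assignments win under Chisel's last-connect semantics:
import chisel3._
import chisel3.util._
import chisel3.experimental.ChiselEnum
object MiniAluOp extends ChiselEnum {
  val add, sub, and, or = Value
}
// Hypothetical module showing a ChiselEnum used directly as an IO type.
class MiniAlu extends Module {
  val io = IO(new Bundle {
    val aluop = Input(MiniAluOp())   // the enum type as an input port
    val a     = Input(UInt(32.W))
    val b     = Input(UInt(32.W))
    val out   = Output(UInt(32.W))
  })
  io.out := 0.U                      // default first; the switch below overrides it
  switch(io.aluop) {
    is(MiniAluOp.add) { io.out := io.a + io.b }
    is(MiniAluOp.sub) { io.out := io.a - io.b }
    is(MiniAluOp.and) { io.out := io.a & io.b }
    is(MiniAluOp.or)  { io.out := io.a | io.b }
  }
}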
OP Edit ::
By adding a mapping:
import chisel3._
import chisel3.Driver
import chisel3.util._
import chisel3.experimental.ChiselEnum
package ALUType {
// object AluOp {
// val addi :: slti :: sltiu :: xori :: ori :: andi :: slli :: slri :: srai :: add :: sub :: sll :: slt :: sltu :: xor :: srl :: sra :: or :: and :: Nil = Enum(19)
// }
object AluOp extends ChiselEnum {
val add, sub, sll, slt, sltu, xor, srl, sra, or, and = Value
val correct_annotation_map = Map[String, UInt](
"add" -> 0.U,
"sub" -> 1.U,
"sll" -> 2.U,
"slt" -> 3.U,
"sltu" -> 4.U,
"xor" -> 5.U,
"srl" -> 6.U,
"sra" -> 7.U,
"or" -> 8.U,
"and" -> 9.U
)
}
}
The value can be passed as an input:
import ALUType.AluOp._
class ALUFile(val dl_size: Int, val op_size: Int, val funct3_size: Int, val funct7_size: Int) extends Module {
import ALUType._
import ALUType.AluOp._
val io = IO(new Bundle {
val val_a = Input(UInt(dl_size.W))
val val_b = Input(UInt(dl_size.W))
val aluop = Input( AluOp() )
// val aluop = Input(UInt(5.W))
// val opcode = Input(UInt(op_size.W))
// val funct3 = Input(UInt(funct3_size.W))
// val funct7 = Input(UInt(funct7_size.W))
val val_out = Output(UInt(dl_size.W))
})
// switch (io.aluop) {
// is (add) {
// io.val_out := io.val_a + io.val_b
// }
// is (slt) {
// io.val_out := 2.U
// }
// }
// Default output first; the switch assignments below override it (last connect wins)
io.val_out := 0.U
switch (io.aluop) {
is (add) {
io.val_out := io.val_a + io.val_b
}
is (slt) {
io.val_out := 2.U
}
}
}
This is still not ideal, as I would like not to have to manually map the strings to UInt values, but it is what it is. Maybe a Scala foreach loop could take the tedious assignment out, who knows.

Scala program using futures is not terminating

I am trying to learn concurrency in Scala and am using Scala futures to generate a dataset of random strings. I want to create an application that can generate a file with any number of records and that scales.
Code:
import java.io.FileWriter
import java.util.concurrent.{ExecutorService, Executors}
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Random, Success}
import scala.concurrent.duration._
object datacreator {
implicit val ec: ExecutionContext = new ExecutionContext {
val threadPool: ExecutorService = Executors.newFixedThreadPool(4)
def execute(runnable: Runnable) {
threadPool.submit(runnable)
}
def reportFailure(t: Throwable) {}
}
def getRecord : String = {
"Random string"
}
def main(args: Array[String]): Unit = {
val filename = args(0)
val number_of_records = args(1)
val file_Object = new FileWriter(filename, true)
val data: Future[Iterable[String]] = Future {
for (i <- 1 to number_of_records.toInt)
yield getRecord
}
val result = data.map{
result => result.foreach(record => file_Object.write(record))
}
result.onComplete{
case Success(value) => {
println("Success")
file_Object.close()
}
case Failure(e) => e.printStackTrace()
}
}
}
I am facing the following issues:
When I run the program using SBT, it writes the results to the file but does not terminate; it just keeps running:
[info] Loading project definition from /Users/cw0155/PersonalProjects/datagen/project
[info] Loading settings for project datagen from build.sbt ...
[info] Set current project to datagenerator (in build file:/Users/cw0155/PersonalProjects/datagen/)
[info] running com.generator.DataGenerator xyz.csv 100
Success
| => datagen / Compile / runMain 255s
When I run the program from the jar as:
scala -cp target/scala-2.13/datagenerator_2.13-0.1.jar com.generator.DataGenerator "pqr.csv" "1000"
it waits indefinitely and does not write to the file.
Any help is much appreciated :)
Try this version
bar.scala
import scala.concurrent.{Await, Future, ExecutionContext}
import scala.concurrent.duration._
import scala.util.{Success, Failure}
import ExecutionContext.Implicits.global
import java.io.FileWriter
object bar {
def getRecord: String = "Random string\n"
def main(args: Array[String]): Unit = {
val filename = args(0)
val number_of_records = args(1)
val data: Future[Iterable[String]] = Future {
for (i <- 1 to number_of_records.toInt)
yield getRecord
}
val file_Object = new FileWriter(filename, true)
val result = data.map( r => r.foreach(record => file_Object.write(record)) )
result.onComplete {
case Success(value) =>
println("Success")
file_Object.close()
case Failure(e) =>
e.printStackTrace()
}
Await.result( result, 10.second )
}
}
Your original version gave me the expected output when I ran it like so
bash-3.2$ scala bar.scala /dev/fd/1 10
Success
Random string
Random string
Random string
Random string
Random string
Random string
Random string
Random string
Random string
Random string
However, without the Await.result your program can exit before the future finishes.
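As for the run that never terminated: my reading (not stated in the answer) is that the original custom ExecutionContext wraps Executors.newFixedThreadPool, whose non-daemon worker threads keep the JVM alive. A minimal sketch that keeps such a pool but shuts it down once the work has been awaited:
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
object PoolShutdownSketch {
  def main(args: Array[String]): Unit = {
    val pool = Executors.newFixedThreadPool(4)
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)
    val work = Future { (1 to 10).map(_ => "Random string\n") }
    Await.result(work, 10.seconds).foreach(print) // block until the future completes
    pool.shutdown() // otherwise the non-daemon worker threads keep the JVM running
  }
}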

Build dynamic UPDATE query in Slick 3

I am looking for a way to generate an UPDATE query over multiple columns that are only known at runtime.
For instance, given a List[(String, Int)], how would I go about generating a query in the form of UPDATE <table> SET k1=v1, k2=v2, kn=vn for all key/value pairs in the list?
I have found that, given a single key/value pair, a plain SQL query can be built as sqlu"UPDATE <table> SET #$key=$value" (where the key is from a trusted source to avoid injection), but I've been unsuccessful in generalizing this to a list of updates without running a separate query for each.
Is this possible?
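(Not part of the answer below, just a sketch of one workaround under the question's own assumptions: the column names are trusted and the values are Ints, so the values cannot carry injection and the whole SET clause can be spliced with #$ into a single statement. The table and column names here are hypothetical.)
import slick.driver.H2Driver.api._
def dynamicUpdate(id: Int, updates: List[(String, Int)]): DBIO[Int] = {
  // keys are trusted identifiers and values are Ints, so inlining both is safe
  val setClause = updates.map { case (k, v) => s"$k = $v" }.mkString(", ")
  sqlu"UPDATE PEOPLE SET #$setClause WHERE PERSON_ID = $id"
}
// usage: db.run(dynamicUpdate(1, List("AGE" -> 30, "SCORE" -> 7)))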
This is one way to do it. I create a table definition T here with the table and column names (TableDesc) as implicit arguments. I would have thought that it should be possible to set them explicitly, but I couldn't find a way. For the example I create two table query instances, aTable and bTable. Then I insert and select some values, and at the end I update a value in bTable.
import slick.driver.H2Driver.api._
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.util.{Failure, Success}
val db = Database.forURL("jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1", "sa", "", null, "org.h2.Driver")
case class TableDesc(tableName: String, stringColumnName: String, intColumnName: String)
class T(tag: Tag)(implicit tableDesc: TableDesc) extends Table[(String, Int)](tag, tableDesc.tableName) {
def stringColumn = column[String](tableDesc.stringColumnName)
def intColumn = column[Int](tableDesc.intColumnName)
def * = (stringColumn, intColumn)
}
val aTable = {
implicit val tableDesc = TableDesc("TABLE_A", "sa", "ia")
TableQuery[T]
}
val bTable = {
implicit val tableDesc = TableDesc("TABLE_B", "sb", "ib")
TableQuery[T]
}
val future = for {
_ <- db.run(aTable.schema.create)
_ <- db.run(aTable += ("Hi", 1))
resultA <- db.run(aTable.result)
_ <- db.run(bTable.schema.create)
_ <- db.run(bTable ++= Seq(("Test1", 1), ("Test2", 2)))
_ <- db.run(bTable.filter(_.stringColumn === "Test1").map(_.intColumn).update(3))
resultB <- db.run(bTable.result)
} yield (resultA, resultB)
Await.result(future, Duration.Inf)
future.onComplete {
case Success(a) => println(s"OK $a")
case Failure(f) => println(s"DOH $f")
}
Thread.sleep(500)
I've put the sleep statement at the end to make sure the Future.onComplete callback gets time to finish before the application ends. Is there any other way?
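One alternative (a sketch layered on the code above, not part of the original answer): instead of the Await/onComplete/sleep combination, fold the reporting into the future chain and block on that, so no sleep is needed.
val reported = future
  .map { case (resultA, resultB) => println(s"OK ($resultA, $resultB)") }
  .recover { case f => println(s"DOH $f") }
Await.result(reported, Duration.Inf) // the reporting has already run when this returns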

Scala / Slick 3.0.1 - Update Multiple Columns

Whenever I get an update request for a given id, I am trying to update the masterId and updatedDtTm columns in a DB table (I don't want to update createdDtTm). The following is my code:
case class Master(id:Option[Long] = None,masterId:String,createdDtTm:Option[java.util.Date],
updatedDtTm:Option[java.util.Date])
/**
* This is my Slick Mapping table
* with the default projection
*/
class MappingMaster(tag:Tag) extends
Table[Master](tag,"master") {
implicit val DateTimeColumnType = MappedColumnType.base[java.util.Date, java.sql.Timestamp](
{
ud => new Timestamp(ud.getTime)
}, {
sd => new java.util.Date(sd.getTime)
})
def id = column[Long]("id",O.PrimaryKey,O.AutoInc)
def masterId = column[String]("master_id")
def createdDtTm = column[java.util.Date]("created_dttm")
def updatedDtTm = column[java.util.Date]("updated_dttm")
def * = (id.? , masterId , createdDtTm.? , updatedDtTm.?) <>
((Master.apply _).tupled , Master.unapply _) }
/**
* Some where in the DAO update call
*/
db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm)).
update(("new_master_id", new Date())))
// I also tried the following
db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm).shaped[(String, java.util.Date)]).update(("new_master_id", new Date())))
The Slick documentation states that in order to update multiple columns one needs to map to the corresponding columns and then update them.
The problem here is the following: the update method seems to be accepting a value of Nothing.
I also tried the following which was doing the same thing as above:
val t = for {
ms <- masterRecords if (ms.id === "1234")
} yield (ms.masterId , ms.updateDtTm)
db.run(t.update(("new_master_id",new Date())))
When I compile the code, it gives me the following compilation error:
Slick does not know how to map the given types.
[error] Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
[error] Required level: slick.lifted.FlatShapeLevel
[error] Source type: (slick.lifted.Rep[String], slick.lifted.Rep[java.util.Date])
[error] Unpacked type: (String, java.util.Date)
[error] Packed type: Any
[error] db.run(masterRecords.filter(_id === id).map(rw => (rw.masterId,rw.updatedDtTm).shaped[(String,java.util.Date)]).update(("new_master_id",new Date()))
I am using Scala 2.11 with Slick 3.0.1 and IntelliJ as the IDE. I'd really appreciate it if you could shed some light on this.
Cheers,
Sathish
(Replaces original answer.) It seems the implicit has to be in scope for the queries; this compiles:
case class Master(id:Option[Long] = None,masterId:String,createdDtTm:Option[java.util.Date],
updatedDtTm:Option[java.util.Date])
implicit val DateTimeColumnType = MappedColumnType.base[java.util.Date, java.sql.Timestamp](
{
ud => new Timestamp(ud.getTime)
}, {
sd => new java.util.Date(sd.getTime)
})
class MappingMaster(tag:Tag) extends Table[Master](tag,"master") {
def id = column[Long]("id",O.PrimaryKey,O.AutoInc)
def masterId = column[String]("master_id")
def createdDtTm = column[java.util.Date]("created_dttm")
def updatedDtTm = column[java.util.Date]("updated_dttm")
def * = (id.? , masterId , createdDtTm.? , updatedDtTm.?) <> ((Master.apply _).tupled , Master.unapply _)
}
private val masterRecords = TableQuery[MappingMaster]
val id: Long = 123
db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm)).update(("new_master_id", new Date())))
val t = for {
ms <- masterRecords if (ms.id === id)
} yield (ms.masterId , ms.updatedDtTm)
db.run(t.update(("new_master_id",new Date())))
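As a usage note (a sketch on top of the code above, not part of the original answer, assuming the same db, masterRecords, id, and Date are in scope): db.run returns a Future of the affected row count, which would normally be awaited or composed rather than fired and forgotten.
import scala.concurrent.Await
import scala.concurrent.duration.Duration
val rowsAffected: Int = Await.result(
  db.run(
    masterRecords
      .filter(_.id === id)
      .map(rw => (rw.masterId, rw.updatedDtTm))
      .update(("new_master_id", new Date()))
  ),
  Duration.Inf
)
println(s"updated $rowsAffected row(s)")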