I have a Scala class that accesses a database through JDBC:
import java.sql.{Connection, DriverManager}

class DataAccess {
  def select(): Unit = {
    val driver = "com.mysql.jdbc.Driver"
    val url = "jdbc:mysql://localhost:3306/db"
    val username = "root"
    val password = "xxxx"
    var connection: Connection = null
    try {
      // make the connection
      Class.forName(driver)
      connection = DriverManager.getConnection(url, username, password)
      // create the statement, and run the select query
      val statement = connection.createStatement()
      val resultSet = statement.executeQuery("SELECT name, descrip FROM table1")
      while (resultSet.next()) {
        val name = resultSet.getString(1)
        val descrip = resultSet.getString(2)
        println("name, descrip = " + name + ", " + descrip)
      }
    } catch {
      case e: Exception => e.printStackTrace()
    }
    connection.close()
  }
}
I access this class in my Play application, like so:
def testSql = Action {
  val da = new DataAccess
  da.select()
  Ok("success")
}
The method testSql may be invoked by several users. The question is: could there be a race condition in the while (resultSet.next()) loop (or in any other part of the class)?
Note: I need to use JDBC as the SQL statement will be dynamic.
No, there cannot be.
Each thread works with its own local ResultSet instance, so there is no concurrent access to the same object.
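As a minimal illustration of that point (the demo object and timeout are just for the sketch): all JDBC state in select() lives in local vals/vars, so two concurrent Play requests each get their own Connection, Statement and ResultSet and can never see each other's cursor.
// Illustrative sketch only: two concurrent callers of the same DataAccess
// instance. Every piece of JDBC state in select() is a local, so the two
// threads never share a ResultSet.
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ConcurrentSelectDemo extends App {
  val da = new DataAccess
  val calls = Future.sequence(Seq(Future(da.select()), Future(da.select())))
  Await.result(calls, 1.minute) // both calls complete independently
}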
Related
I am trying to write a Scala JDBC program that will run an analyze statement on tables present in our database. To do that, I wrote the code below.
object Trip {
  def main(args: Array[String]): Unit = {
    val gs = new GetStats(args(0))
    gs.run_analyze()
  }
}
-----------------------------------------------------------------
class GetStats {
  var tables = ""

  def this(tables: String) {
    this
    this.tables = tables
  }

  def run_analyze(): Unit = {
    val tabList = tables.split(",")
    val gpc = new GpConnection()
    val con = gpc.getGpCon()
    val statement = con.get.createStatement()
    try {
      for (t <- tabList) {
        val rs = statement.execute(s"analyze ${t}")
        if (rs.equals(true)) println(s"Analyzed ${t}")
        else println(s"Analyze failed ${t}")
      }
    } catch {
      case pse: PSQLException => pse.printStackTrace()
      case e: Exception => e.printStackTrace()
    }
  }
}
-----------------------------------------------------------------
class GpConnection {
  var gpCon: Option[Connection] = None

  def getGpCon(): Option[Connection] = {
    val url = "jdbc:postgresql://.."
    val driver = "org.postgresql.Driver"
    val username = "user"
    val password = "1239876"
    Class.forName(driver)
    if (gpCon == None || gpCon.get.isClosed) {
      gpCon = DriverManager.getConnection(url, username, password).asInstanceOf[Option[Connection]]
      gpCon
    } else gpCon
  }
}
I build a jar file in my IDE (IntelliJ) and submit the jar as below.
scala -cp /home/username/jars/postgresql-42.1.4.jar analyzetables_2.11-0.1.jar schema.table
When I submit the jar file, I see the exception ClassCastException as given below.
java.lang.ClassCastException: org.postgresql.jdbc.PgConnection cannot be cast to scala.Option
at com.db.manager.GpConnection.getGpCon(GpConnection.scala:15)
at com.gp.analyze.GetStats.run_analyze(GetStats.scala:19)
at com.runstats.Trip$.main(Trip.scala:8)
at com.runstats.Trip.main(Trip.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.reflect.internal.util.ScalaClassLoader.$anonfun$run$2(ScalaClassLoader.scala:98)
at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:32)
at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:30)
at scala.reflect.internal.util.ScalaClassLoader.run$(ScalaClassLoader.scala:98)
at scala.reflect.internal.util.ScalaClassLoader.run(ScalaClassLoader.scala:90)
at scala.tools.nsc.CommonRunner.run$(ObjectRunner.scala:22)
The exception says that the connection cannot be cast to scala.Option, but if I don't use Option I cannot use null to initialize the connection object, and then I see a NullPointerException when I run the code.
Could anyone let me know what mistake I am making here and how I can fix it?
asInstanceOf[] doesn't work that way. It won't just create an Option[] for you.
val x:Option[Int] = 5.asInstanceOf[Option[Int]] //not gonna happen
You have to create the Option[] explicitly.
val x:Option[Int] = Option(5)
You can use an uninitialized var as:
var gpCon: Connection = _
But since you are already using scala.Option, which is the better approach, do it in a functional way rather than writing imperative Java-style code in Scala:
// a singleton object (Scala provided)
object GpConnection {
  private var gpCon: Option[Connection] = None

  // returns a Connection (no Option - as we need it!)
  def getOrCreateCon(): Connection = gpCon match {
    case conOpt if conOpt.isEmpty || conOpt.get.isClosed =>
      // connection not present or is closed
      val url = "jdbc:postgresql://.."
      val driver = "org.postgresql.Driver"
      val username = "user"
      val password = "1239876"
      // may throw an exception - you can even handle this
      Class.forName(driver)
      // may throw an exception - you can even handle this
      gpCon = Option(DriverManager.getConnection(url, username, password))
      gpCon.getOrElse(throw new RuntimeException("Cannot create connection"))
    case Some(con) => con
  }
}
Use it like:
val con = GpConnection.getOrCreateCon()
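With that in place, GetStats from the question could be reduced to something like the following sketch (keeping the comma-separated table list from the command line; error handling stays as in the original):
class GetStats(tables: String) {
  def run_analyze(): Unit = {
    // getOrCreateCon returns a plain Connection, so there is no Option to unwrap
    val statement = GpConnection.getOrCreateCon().createStatement()
    try {
      for (t <- tables.split(",")) {
        // execute only reports whether a ResultSet was produced;
        // a failing analyze surfaces as an exception instead
        statement.execute(s"analyze $t")
        println(s"Analyzed $t")
      }
    } catch {
      case e: Exception => e.printStackTrace()
    }
  }
}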
class Employee(tag: Tag) extends Table[table_types.user](tag, "EMPLOYEE") {
  def employeeID = column[Int]("EMPLOYEE_ID")
  def empName = column[String]("NAME")
  def startDate = column[String]("START_DATE")
  def * = (employeeID, empName, startDate)
}
object employeeHandle {
  def insert(emp: Employee): Future[Any] = {
    val dao = new SlickPostgresDAO
    val db = dao.db
    val insertdb = DBIO.seq(employee += (emp))
    db.run(insertdb)
  }
}
Insert a million employee records into the database:
object Hello extends App {
  val employees = List[*1 million employee list*]
  for (employee <- employees) {
    employeeHandle.insert(employee)
    *Code to connect to rest api to confirm entry*
  }
}
However, when I run the above code I soon run out of connections to Postgres. How can I do this in parallel (in a non-blocking way) while making sure I don't run out of connections to Postgres?
I don't think you need to do it in parallel; I don't see how that would solve the problem. Instead, you could simply create the connection before you start the loop and pass it to employeeHandle.insert(db, employee).
Something like (I don't know Scala):
object Hello extends App {
  val dao = new SlickPostgresDAO
  val db = dao.db
  val employees = List[*1 million employee list*]
  for (employee <- employees) {
    employeeHandle.insert(db, employee)
    *Code to connect to rest api to confirm entry*
  }
}
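For completeness, a sketch of how employeeHandle.insert could accept the shared db handle. This assumes the profile's api._ import (so Database and DBIO resolve), the table_types.user row type from the question's Table definition, and that the employee TableQuery is in scope:
// Sketch under the assumptions above - not the question's exact DAO types.
import slick.driver.PostgresDriver.api._
import scala.concurrent.Future

object employeeHandle {
  // db is created once by the caller (e.g. via SlickPostgresDAO) and reused
  // for every insert, so only one connection pool is ever opened
  def insert(db: Database, emp: table_types.user): Future[Unit] = {
    val insertdb = DBIO.seq(employee += emp)
    db.run(insertdb)
  }
}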
Almost all the Slick insert examples I have come across use blocking to fulfil the results. It would be nice to have one that doesn't.
My take on it:
object Hello extends App {
  val employees = List[*1 million employee list*]
  val groupedList = employees.grouped(10).toList
  insertTests()

  def insertTests(l: List[List[Employee]] = groupedList): Unit = {
    val ins = l.head
    val futures = ins.map { emp => employeeHandle.insert(emp) }
    val seq = Future.sequence(futures)
    Await.result(seq, Duration.Inf)
    if (l.tail.nonEmpty) insertTests(l.tail)
  }
}
Also, the DAO and db setup in employeeHandle should live outside the insert method:
object employeeHandle {
  val dao = new SlickPostgresDAO
  val db = dao.db

  def insert(emp: Employee): Future[Any] = {
    val insertdb = DBIO.seq(employee += (emp))
    db.run(insertdb)
  }
}
Doobie
Without much experience in either Scala or Doobie, I am trying to select data from a DB2 database. The following query works fine and prints the expected 5 employees.
import doobie.imports._, scalaz.effect.IO

object ScalaDoobieSelect extends App {
  val urlPrefix = "jdbc:db2:"
  val schema = "SCHEMA"
  val obdcName = "ODBC"
  val url = urlPrefix + obdcName + ":" +
    "currentSchema=" + schema + ";" +
    "currentFunctionPath=" + schema + ";"
  val driver = "com.ibm.db2.jcc.DB2Driver"
  val username = "username"
  val password = "password"

  implicit val han = LogHandler.jdkLogHandler // (ii)

  val xa = DriverManagerTransactor[IO](
    driver, url, username, password
  )

  case class User(id: String, name: String)

  def find(): ConnectionIO[List[User]] =
    sql"SELECT ID, NAME FROM EMPLOYEE FETCH FIRST 10 ROWS ONLY"
      .query[User]
      .process
      .take(5) // (i)
      .list

  find()
    .transact(xa)
    .unsafePerformIO
    .foreach(e => println("ID = %s, NAME = %s".format(e.id, e.name)))
}
Issue
When I want to read all selected rows and remove take(5), so that I have .process.list instead of .process.take(5).list, I get the following error. (i)
com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][3.64.133] Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
I am wondering what take(5) changes so that it does not return an error. To get more information about the invalid operation, I tried to enable logging. (ii) Unfortunately, logging is not supported for streaming. How can I get more information about what operation causes this error?
Plain JDBC
The plain JDBC query below, which is in my opinion equivalent, works as expected and returns all 10 rows.
import java.sql.{Connection, DriverManager}

object ScalaJdbcConnectSelect extends App {
  val urlPrefix = "jdbc:db2:"
  val schema = "SCHEMA"
  val obdcName = "ODBC"
  val url = urlPrefix + obdcName + ":" +
    "currentSchema=" + schema + ";" +
    "currentFunctionPath=" + schema + ";"
  val driver = "com.ibm.db2.jcc.DB2Driver"
  val username = "username"
  val password = "password"

  var connection: Connection = _
  try {
    Class.forName(driver)
    connection = DriverManager.getConnection(url, username, password)
    val statement = connection.createStatement
    val rs = statement.executeQuery(
      "SELECT ID, NAME FROM EMPLOYEE FETCH FIRST 10 ROWS ONLY"
    )
    while (rs.next) {
      val id = rs.getString("ID")
      val name = rs.getString("NAME")
      println("ID = %s, NAME = %s".format(id, name))
    }
  } catch {
    case e: Exception => e.printStackTrace
  }
  connection.close
}
Environment
As can be seen in the error message, I am using db2jcc.jar version 3.64.133. DB2 is at version 11.
I am iteratively querying a MySQL table called txqueue that is growing continuously.
Each successive query only considers rows that were inserted into the txqueue table after the query executed in the previous iteration.
To achieve this, each successive query selects rows from the table where the primary key (seqno field in my example below) exceeds the maximum seqno observed in the previous query.
Any newly inserted rows identified in this way are written into a csv file.
The intention is for this process to run indefinitely.
The tail-recursive function below works OK, but after a while it runs into a java.lang.StackOverflowError. The results of each iterative query contain two to three rows, and results are returned every second or so.
Any ideas on how to avoid the java.lang.StackOverflowError?
Is this actually something that can/should be achieved with streaming?
Many thanks for any suggestions.
Here's the code that works for a while:
import java.io.{BufferedWriter, FileWriter}
import java.sql.{Connection, DriverManager}

object TXQImport {
  val driver = "com.mysql.jdbc.Driver"
  val url = "jdbc:mysql://mysqlserveraddress/mysqldb"
  val username = "username"
  val password = "password"
  var connection: Connection = null

  def txImportLoop(startID: BigDecimal): Unit = {
    try {
      Class.forName(driver)
      connection = DriverManager.getConnection(url, username, password)
      val statement = connection.createStatement()
      val newMaxID = statement.executeQuery("SELECT max(seqno) as maxid from txqueue")
      val maxid = new Iterator[BigDecimal] {
        def hasNext = newMaxID.next()
        def next() = newMaxID.getBigDecimal(1)
      }.toStream.max
      val selectStatement = statement.executeQuery("SELECT seqno,someotherfield " +
        " from txqueue where seqno >= " + startID + " and seqno < " + maxid)
      if (startID != maxid) {
        val ts = System.currentTimeMillis
        val file = new java.io.File("F:\\txqueue " + ts + ".txt")
        val bw = new BufferedWriter(new FileWriter(file))
        // Iterate over the ResultSet
        while (selectStatement.next()) {
          bw.write(selectStatement.getString(1) + "," + selectStatement.getString(2))
          bw.newLine()
        }
        bw.close()
      }
      connection.close()
      txImportLoop(maxid)
    }
    catch {
      case e: Exception => e.printStackTrace
    }
  }

  def main(args: Array[String]) {
    txImportLoop(0)
  }
}
Your function is not tail-recursive (because of the catch block at the end).
That's why you end up with a stack overflow.
You should always annotate functions you intend to be tail-recursive with @scala.annotation.tailrec - the compiler will then fail compilation if tail recursion is impossible, so you won't be surprised by a stack overflow at run time.
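A minimal sketch of the restructuring (assuming the JDBC and file work from the question is factored into a hypothetical runOneIteration helper that returns the next start id): the try/catch now wraps only one iteration, so the recursive call is the last expression, @tailrec compiles, and the loop runs in constant stack space.
import scala.annotation.tailrec

@tailrec
def txImportLoop(startID: BigDecimal): Unit = {
  val nextID =
    try {
      runOneIteration(startID) // hypothetical helper doing the query/file work above
    } catch {
      case e: Exception =>
        e.printStackTrace()
        startID // keep polling from the same id after a failure
    }
  txImportLoop(nextID)
}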
Is there a way to have Slick's code generation generate code for only a single schema? Say, public? I have extensions that create a whole ton of tables (e.g. postgis, pg_jobman) that make the code Slick generates gigantic.
Use this code with the appropriate values and schema names:
object CodeGenerator {
  def outputDir: String = ""
  def pkg: String = ""
  def schemaList: String = "schema1, schema2"
  def url: String = "dburl"
  def fileName: String = ""

  val user = "dbUsername"
  val password = "dbPassword"
  val slickDriver = "scala.slick.driver.PostgresDriver"
  val JdbcDriver = "org.postgresql.Driver"
  val container = "Tables"

  def generate() = {
    val driver: JdbcProfile = buildJdbcProfile
    val schemas = createSchemaList
    var model = createModel(driver, schemas)
    val codegen = new SourceCodeGenerator(model) {
      // customize Scala table name (table class, table values, ...)
      override def tableName = dbTableName => dbTableName match {
        case _ => dbTableName + "Table"
      }
      override def code = {
        // imports is copied right out of
        // scala.slick.model.codegen.AbstractSourceCodeGenerator
        val imports = {
          "import scala.slick.model.ForeignKeyAction\n" +
            (if (tables.exists(_.hlistEnabled)) {
              "import scala.slick.collection.heterogenous._\n" +
                "import scala.slick.collection.heterogenous.syntax._\n"
            } else ""
            ) +
            (if (tables.exists(_.PlainSqlMapper.enabled)) {
              "import scala.slick.jdbc.{GetResult => GR}\n" +
                "// NOTE: GetResult mappers for plain SQL are only generated for tables where Slick knows how to map the types of all columns.\n"
            } else ""
            ) + "\n\n" //+ tables.map(t => s"implicit val ${t.model.name.table}Format = Json.format[${t.model.name.table}]").mkString("\n")+"\n\n"
        }
        val bySchema = tables.groupBy(t => {
          t.model.name.schema
        })
        val schemaFor = (schema: Option[String]) => {
          bySchema(schema).sortBy(_.model.name.table).map(
            _.code.mkString("\n")
          ).mkString("\n\n")
        }
      }
      val joins = tables.flatMap(_.foreignKeys.map { foreignKey =>
        import foreignKey._
        val fkt = referencingTable.TableClass.name
        val pkt = referencedTable.TableClass.name
        val columns = referencingColumns.map(_.name) zip
          referencedColumns.map(_.name)
        s"implicit def autojoin${fkt + name.toString} = (left:${fkt} ,right:${pkt}) => " +
          columns.map {
            case (lcol, rcol) =>
              "left." + lcol + " === " + "right." + rcol
          }.mkString(" && ")
      })
      override def entityName = dbTableName => dbTableName match {
        case _ => dbTableName
      }
      override def Table = new Table(_) {
        table =>
        // customize table value (TableQuery) name (uses tableName as a basis)
        override def TableValue = new TableValue {
          override def rawName = super.rawName.uncapitalize
        }
        // override generator responsible for columns
        override def Column = new Column(_) {
          // customize Scala column names
          override def rawName = (table.model.name.table, this.model.name) match {
            case _ => super.rawName
          }
        }
      }
    }
    println(outputDir + "\\" + fileName)
    (new File(outputDir)).mkdirs()
    val fw = new FileWriter(outputDir + File.separator + fileName)
    fw.write(codegen.packageCode(slickDriver, pkg, container))
    fw.close()
  }

  def createModel(driver: JdbcProfile, schemas: Set[Option[String]]): Model = {
    driver.simple.Database
      .forURL(url, user = user, password = password, driver = JdbcDriver)
      .withSession { implicit session =>
        val filteredTables = driver.defaultTables.filter(
          (t: MTable) => schemas.contains(t.name.schema)
        )
        PostgresDriver.createModel(Some(filteredTables))
      }
  }

  def createSchemaList: Set[Option[String]] = {
    schemaList.split(",").map({
      case "" => None
      case (name: String) => Some(name)
    }).toSet
  }

  def buildJdbcProfile: JdbcProfile = {
    val module = currentMirror.staticModule(slickDriver)
    val reflectedModule = currentMirror.reflectModule(module)
    val driver = reflectedModule.instance.asInstanceOf[JdbcProfile]
    driver
  }
}
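Invocation is then simply (assuming the vals and defs at the top are filled in with your real connection details, output directory and file name):
object RunCodeGen extends App {
  // generates the Slick table code for the schemas listed in schemaList
  CodeGenerator.generate()
}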
I encountered the same problem and found this question. The answer by S.Karthik sent me in the right direction. However, the code in that answer is slightly outdated and, I think, a bit over-complicated, so I crafted my own solution:
import slick.codegen.SourceCodeGenerator
import slick.driver.JdbcProfile
import slick.model.Model

import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext}

val slickDriver = "slick.driver.PostgresDriver"
val jdbcDriver = "org.postgresql.Driver"
val url = "jdbc:postgresql://localhost:5432/mydb"
val outputFolder = "/path/to/src/test/scala"
val pkg = "com.mycompany"
val user = "user"
val password = "password"

object MySourceCodeGenerator {
  def run(slickDriver: String, jdbcDriver: String, url: String, outputDir: String,
          pkg: String, user: Option[String], password: Option[String]): Unit = {
    val driver: JdbcProfile =
      Class.forName(slickDriver + "$").getField("MODULE$").get(null).asInstanceOf[JdbcProfile]
    val dbFactory = driver.api.Database
    val db = dbFactory.forURL(url, driver = jdbcDriver, user = user.orNull,
      password = password.orNull, keepAliveConnection = true)
    try {
      // **1**
      val allSchemas = Await.result(db.run(
        driver.createModel(None, ignoreInvalidDefaults = false)(ExecutionContext.global).withPinnedSession), Duration.Inf)
      // **2**
      val publicSchema = new Model(allSchemas.tables.filter(_.name.schema.isEmpty), allSchemas.options)
      // **3**
      new SourceCodeGenerator(publicSchema).writeToFile(slickDriver, outputDir, pkg)
    } finally db.close
  }
}

MySourceCodeGenerator.run(slickDriver, jdbcDriver, url, outputFolder, pkg, Some(user), Some(password))
I'll explain what's going on here:
I copied the run function from the SourceCodeGenerator class that's in the slick-codegen library. (I used version slick-codegen_2.10-3.1.1.)
// **1**: In the original code, the generated Model was referenced in a val called m. I renamed that to allSchemas.
// **2**: I created a new Model (publicSchema), using the options from the original model, and using a filtered version of the tables set from the original model. It turns out tables from the public schema don't get a schema name in the model. Hence the isEmpty. Should you need tables from one or more other schemas, you can easily create a different filter expression.
// **3**: I create a SourceCodeGenerator with the created publicSchema model.
Of course, it would be even better if the Slick code generator incorporated an option to select one or more schemas.