I am iteratively querying a mysql table called txqueue that is growing continuously.
Each successive query only considers rows that were inserted into the txqueue table after the query executed in the previous iteration.
To achieve this, each successive query selects rows from the table where the primary key (seqno field in my example below) exceeds the maximum seqno observed in the previous query.
Any newly inserted rows identified in this way are written into a csv file.
The intention is for this process to run indefinitely.
The tail-recursive function below works OK, but after a while it runs into a java.lang.StackOverflowError. Each iterative query returns two to three rows, and results come back every second or so.
Any ideas on how to avoid the java.lang.StackOverflowError?
Is this actually something that can/should be achieved with streaming?
Many thanks for any suggestions.
Here's the code that works for a while:
import java.io.{BufferedWriter, FileWriter}
import java.sql.{Connection, DriverManager}

object TXQImport {

  val driver = "com.mysql.jdbc.Driver"
  val url = "jdbc:mysql://mysqlserveraddress/mysqldb"
  val username = "username"
  val password = "password"
  var connection: Connection = null

  def txImportLoop(startID: BigDecimal): Unit = {
    try {
      Class.forName(driver)
      connection = DriverManager.getConnection(url, username, password)
      val statement = connection.createStatement()
      val newMaxID = statement.executeQuery("SELECT max(seqno) as maxid from txqueue")
      val maxid = new Iterator[BigDecimal] {
        def hasNext = newMaxID.next()
        def next() = newMaxID.getBigDecimal(1)
      }.toStream.max
      val selectStatement = statement.executeQuery("SELECT seqno,someotherfield " +
        " from txqueue where seqno >= " + startID + " and seqno < " + maxid)
      if (startID != maxid) {
        val ts = System.currentTimeMillis
        val file = new java.io.File("F:\\txqueue " + ts + ".txt")
        val bw = new BufferedWriter(new FileWriter(file))
        // Iterate over the ResultSet and write each new row to the CSV file
        while (selectStatement.next()) {
          bw.write(selectStatement.getString(1) + "," + selectStatement.getString(2))
          bw.newLine()
        }
        bw.close()
      }
      connection.close()
      txImportLoop(maxid)
    }
    catch {
      case e: Throwable => e.printStackTrace()
    }
  }

  def main(args: Array[String]): Unit = {
    txImportLoop(0)
  }
}
Your function is not tail-recursive (because of the catch at the end, which still has to run after the recursive call).
That's why you end up with a stack overflow.
You should always annotate the functions you intend to be tail-recursive with @scala.annotation.tailrec - the compiler will then fail the build when tail recursion is impossible, so that you won't be surprised by it at run time.
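As a minimal sketch of one way to restructure the loop: pull the per-iteration work (and its try/catch) into a helper that returns the next starting seqno, so the recursive call is the last expression of the function and @tailrec is accepted. doOneImport below is a hypothetical wrapper around the body of the original function, not something from the question.

import scala.annotation.tailrec

@tailrec
def txImportLoop(startID: BigDecimal): Unit = {
  val nextID: BigDecimal =
    try {
      // open the connection, query max(seqno), write the new rows to the CSV
      // file exactly as in the original body, close the connection, and
      // return the max seqno that was observed
      doOneImport(startID) // hypothetical helper wrapping the original body
    } catch {
      case e: Throwable =>
        e.printStackTrace()
        startID // on failure, retry from the same seqno next time around
    }
  txImportLoop(nextID) // now in tail position: the stack no longer grows
}

A plain while loop that updates a local startID variable would work just as well if you prefer not to rely on the annotation.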
I have a dataframe in Spark and I need to process a particular column in that dataframe using a REST API. The API transforms a string and returns a result string. The API can process multiple strings at a time.
I can iterate over the column of the dataframe, collect n values in a batch, call the API, add the results back to the dataframe, and continue with the next batch. But this seems like the normal way of doing it without taking advantage of Spark.
Is there a better way to do this which takes advantage of the Spark SQL optimiser and Spark's parallel processing?
For Spark parallel processing you can use mapPartitions:
case class Input(col: String)
case class Output(col: String, new_col: String)

val data = spark.read.csv("/a/b/c").as[Input].repartition(n)

def declare(partitions: Iterator[Input]): Iterator[Output] = {
  val url = ""
  implicit val formats: DefaultFormats.type = DefaultFormats
  var list = new ListBuffer[Output]()
  val httpClient = ??? // build your HTTP client here
  try {
    while (partitions.hasNext) {
      val x = partitions.next()
      val col = x.col
      val concat_url = ""
      val apiResp = HttpClientAcceptSelfSignedCertificate.call(httpClient, concat_url)
      if (apiResp.isDefined) {
        // extract the transformed value from the API's JSON response
        val json = parse(apiResp.get)
        val new_col = (json \\ "value_to_take_from_api").children.head.values.toString
        val output = Output(col, new_col)
        list += output
      }
      else {
        val new_col = "Not Found"
        val output = Output(col, new_col)
        list += output
      }
    }
  } catch {
    case e: Exception => println("api Exception with : " + e.getMessage)
  }
  finally {
    HttpClientAcceptSelfSignedCertificate.close(httpClient)
  }
  list.iterator
}

val dd: Dataset[Output] = data.mapPartitions(x => declare(x))
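As a usage note (a sketch only, assuming spark is your SparkSession and n is the number of partitions you want): the case-class encoders needed by as[Input] and mapPartitions come from spark.implicits._, and since each partition builds its own HTTP client, n also controls how many concurrent API connections you open.

// Encoders for the Input/Output case classes; assumes `spark` is the SparkSession.
import spark.implicits._

val data = spark.read.csv("/a/b/c").as[Input].repartition(n)
val dd: Dataset[Output] = data.mapPartitions(declare _)
dd.show(5) // quick sanity check that the API column came back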
def displayTable(table: TableQuery[Restaurants]): Unit = {
  val tablequery = table.map(_.id)
  val action = tablequery.result
  val result = db.run(action)
  result.foreach(id => id.foreach(new_id => println(new_id)))
  total_points = total_points + 10
}
I have tried to print out all the data to the screen but I have gotten nowhere. My question is: why does nothing print out? I am using Scala and a JDBC connection via Slick. If you remove new_id => println(new_id), you get:
def displayTable(table: TableQuery[Restaurants]): Unit = {
  val tablequery = table.map(_.id)
  val action = tablequery.result
  val result = db.run(action)
  result.foreach(id => println(id))
  total_points = total_points + 10
}
This code produces output like the following: "Vector()". Can someone please help me print out all the data? I loaded it in using the following code:
def fillTable(): TableQuery[Restaurants] = {
  println("Table filled.")
  val restaurants = TableQuery[Restaurants]
  val setup = DBIO.seq(
    restaurants.schema.create
  )
  val setupFuture = db.run(setup)
  val bufferedSource = io.Source.fromFile("src/main/scala/Restaurants.csv")
  for (line <- bufferedSource.getLines) {
    val cols = line.split(",").map(_.trim)
    var restaurant = new Restaurant(s"${cols(0)}", s"${cols(1)}", s"${cols(2)}",
      s"${cols(3)}", s"${cols(4)}", s"${cols(5)}", s"${cols(6)}",
      s"${cols(7)}", s"${cols(8)}", s"${cols(9)}")
    restaurants.forceInsert(s"${cols(0)}", s"${cols(1)}", s"${cols(2)}",
      s"${cols(3)}", s"${cols(4)}", s"${cols(5)}", s"${cols(6)}",
      s"${cols(7)}", s"${cols(8)}", s"${cols(9)}")
    total_rows = total_rows + 1
  }
  restaurants
}
This is my first question so I apologize for the format.
The fact that Vector() is your output in the second version of displayTable is a strong hint that your query is returning an empty result, and therefore has no ids to print out. I haven't run your code myself, but I suspect this is because restaurants.forceInsert returns an action, and you need to db.run() it to actually execute the query.
I'm also curious why you create var restaurant = ... but then ignore it, and call forceInsert recreating the tuple from the csv values again. Why not restaurants.forceInsert(restaurant)?
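A minimal sketch of that fix, assuming your Restaurants table maps its rows to the Restaurant class (adapt it if your * projection is a tuple instead):

import scala.concurrent.Await
import scala.concurrent.duration._

for (line <- bufferedSource.getLines) {
  val cols = line.split(",").map(_.trim)
  val restaurant = new Restaurant(cols(0), cols(1), cols(2), cols(3), cols(4),
    cols(5), cols(6), cols(7), cols(8), cols(9))
  // forceInsert only builds an action; db.run executes it, and Await makes
  // sure the row is in the table before moving on (fine for a small script).
  Await.result(db.run(restaurants.forceInsert(restaurant)), 10.seconds)
  total_rows = total_rows + 1
}

You would also want to wait for setupFuture (the schema creation) to complete before inserting, for the same reason.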
I am trying to create batches of rows of Dataset in Spark.
To control the number of records sent to the service, I want to batch the items so that I can maintain the rate at which the data is sent.
For example, with:
case class Person(name:String, address: String)
case class PersonBatch(personBatch: List[Person])
For a given Dataset[Person] I want to create Dataset[PersonBatch]
For example, if the input Dataset[Person] has 100 records, the output Dataset[PersonBatch] should be such that every PersonBatch is a list of n Person records.
I have tried this but it didn't work.
object DataBatcher extends Logger {

  var batchList: ListBuffer[PersonBatch] = ListBuffer[PersonBatch]()
  var batchSize: Long = 500 // default batch size

  def addToBatchList(batch: PersonBatch): Unit = {
    batchList += batch
  }

  def clearBatchList(): Unit = {
    batchList.clear()
  }

  def createBatches(ds: Dataset[Person]): Dataset[PersonBatch] = {
    val dsCount = ds.count()
    logger.info(s"Count of dataset passed for creating batches : ${dsCount}")
    val batchElement = ListBuffer[Person]()
    val batch = PersonBatch(batchElement)
    ds.foreach(x => {
      batch.personBatch += x
      if (batch.personBatch.length == batchSize) {
        addToBatchList(batch)
        batch.personBatch.clear()
      }
    })
    if (batch.personBatch.length > 0) {
      addToBatchList(batch)
      batch.personBatch.clear()
    }
    sparkSession.createDataset(batchList)
  }
}
I want to run this job on a Hadoop cluster.
Can someone help me with this?
The grouped function available on an iterator (for example, the one you get inside mapPartitions) may be useful for you.
For example:
iter.grouped(batchSize)
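For example, a minimal sketch of that approach applied directly to the Dataset (assuming the Person and PersonBatch case classes from the question, and that batching within each partition is acceptable):

import org.apache.spark.sql.Dataset

def createBatches(ds: Dataset[Person], batchSize: Int): Dataset[PersonBatch] = {
  import ds.sparkSession.implicits._ // encoder for the PersonBatch case class
  ds.mapPartitions { people =>
    // chop each partition's iterator into groups of batchSize;
    // every group becomes one PersonBatch row
    people.grouped(batchSize).map(group => PersonBatch(group.toList))
  }
}

This avoids the mutable driver-side state in the question's code, which does not work because foreach runs on the executors.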
Here is a sample code snippet that does a batch insert with iter.grouped(batchSize) - here it's 1000 - to insert into a database:
df.repartition(numofpartitionsyouwant) // numPartitions ~ number of simultaneous DB connections you are planning to give...

def insertToTable(sqlDatabaseConnectionString: String,
                  sqlTableName: String): Unit = {
  val tableHeader: String = dataFrame.columns.mkString(",")
  dataFrame.foreachPartition { partition =>
    // NOTE: one connection per partition (a better way is to use connection pools)
    val sqlExecutorConnection: Connection =
      DriverManager.getConnection(sqlDatabaseConnectionString)
    // Batch size of 1000 is used since some databases can't use a batch size of more than 1000, e.g. Azure SQL
    partition.grouped(1000).foreach { group =>
      val insertString: scala.collection.mutable.StringBuilder =
        new scala.collection.mutable.StringBuilder()
      group.foreach { record =>
        insertString.append("('" + record.mkString(",") + "'),")
      }
      sqlExecutorConnection
        .createStatement()
        .executeUpdate(f"INSERT INTO [$sqlTableName] ($tableHeader) VALUES "
          + insertString.stripSuffix(","))
    }
    sqlExecutorConnection.close() // close the connection so that connections won't exhaust
  }
}
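A similar per-partition pattern with plain JDBC batching (addBatch / executeBatch) looks like the snippet below; the connection URL and table details are placeholders.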
val tableHeader: String = dataFrame.columns.mkString(",")
dataFrame.foreachPartition((it: Iterator[Row]) => {
  println("partition index: ")
  val url = "jdbc:..." + "user=;password=;" // connection details elided
  val conn = DriverManager.getConnection(url)
  conn.setAutoCommit(true)
  val stmt = conn.createStatement()
  val batchSize = 10
  var i = 0
  while (it.hasNext) {
    val row = it.next
    import java.sql.SQLException
    import java.sql.SQLIntegrityConstraintViolationException
    try {
      stmt.addBatch(" UPDATE TABLE SET STATUS = 0 , " +
        " DATE ='" + new java.sql.Timestamp(System.currentTimeMillis()) + "'" +
        " where id = " + row.getAs("IDNUM"))
      i += 1
      if (i % batchSize == 0) {
        // flush the accumulated batch every batchSize rows
        stmt.executeBatch
        conn.commit
      }
    } catch {
      case e: SQLIntegrityConstraintViolationException =>
      case e: SQLException =>
        e.printStackTrace()
    }
    finally {
      stmt.executeBatch
      conn.commit
    }
  }
  import java.util
  val ret = stmt.executeBatch
  System.out.println("Ret val: " + util.Arrays.toString(ret))
  System.out.println("Update count: " + stmt.getUpdateCount)
  conn.commit
  stmt.close
})
I'm trying to move data from an RDD to a Postgres table, using:
def copyIn(reader: java.io.Reader, columnStmt: String = "") = {
  // connect to the postgres database on the localhost
  val driver = "org.postgresql.Driver"
  var connection: Connection = null
  Class.forName(driver)
  connection = DriverManager.getConnection()
  try {
    connection.unwrap(classOf[PGConnection]).getCopyAPI.copyIn(s"COPY my_table ($columnStmt) FROM STDIN WITH CSV", reader)
  } catch {
    case se: SQLException => println(se.getMessage)
    case t: Throwable => println(t.getMessage)
  } finally {
    connection.close()
  }
}
myRdd.mapPartitions(iter => {
  val sb = new StringBuilder()
  var n_iter = iter.map(row => {
    val mapRequest = Utils.getMyRowMap(myMap, row)
    sb.append(mapRequest.values.mkString(", ")).append("\n")
  })
  copyIn(new StringReader(sb.toString), geoSelectMap.keySet.mkString(", "))
  sb.clear
  n_iter
}).collect
The script keeps getting into the copyIn function with no data to insert. I think it may be because iter.map only builds a lazy mapping over the partition and does not actually iterate it? I tried collecting the whole myRdd object and still didn't get any data into the copyIn function.
How can I iterate over an RDD and get the StringBuilder appended, and why doesn't the snippet above work?
Anybody have a clue?
iter is an Iterator. So iter.map creates a new Iterator, but you don't actually iterate it and it does nothing. You probably want foreach instead. Except then iter will be empty by the time you return it and the result of collect will be an empty RDD.
The actual method you want is foreachPartition:
myRdd.foreachPartition(iter => {
  val sb = new StringBuilder()
  iter.foreach(row => {
    val mapRequest = Utils.getMyRowMap(myMap, row)
    sb.append(mapRequest.values.mkString(", ")).append("\n")
  })
  copyIn(new StringReader(sb.toString), geoSelectMap.keySet.mkString(", "))
  sb.clear
})
and then myRdd.collect if you want to collect it as well. (Persist myRdd if you want to use it twice without recalculating it.)
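For example, a small sketch of that caching pattern (myMap, geoSelectMap and Utils are the objects from the question):

import org.apache.spark.storage.StorageLevel

// Cache the RDD so the write and the later collect don't both recompute it.
myRdd.persist(StorageLevel.MEMORY_AND_DISK)

myRdd.foreachPartition(iter => {
  val sb = new StringBuilder()
  iter.foreach(row => sb.append(Utils.getMyRowMap(myMap, row).values.mkString(", ")).append("\n"))
  copyIn(new StringReader(sb.toString), geoSelectMap.keySet.mkString(", "))
})

val rows = myRdd.collect() // second use of the same cached RDD
myRdd.unpersist()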
While inserting thousands of records every five seconds through batch inserts in Slick 3, I am getting:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
My data access layer looks like:
val db: CustomPostgresDriver.backend.DatabaseDef = Database.forURL(url, user = user, password = password, driver = jdbcDriver)

override def insertBatch(rowList: List[T#TableElementType]): Future[Long] = {
  val res = db.run(insertBatchQuery(rowList)).map(_.head.toLong).recover {
    case ex: Throwable => RelationalRepositoryUtility.handleBatchOperationErrors(ex)
  }
  //db.close()
  res
}

override def insertBatchQuery(rowList: List[T#TableElementType]): FixedSqlAction[Option[Int], NoStream, Write] = {
  query ++= (rowList)
}
Closing the connection in insertBatch has no effect; it still gives the same error.
I am calling insertBatch from my code like this:
val temp1 = list1.flatMap { li =>
  Future.sequence(li.map { trip =>
    val data = for {
      tripData <- TripDataRepository.insertQuery(trip.tripData)
      subTripData <- SubTripDataRepository.insertBatchQuery(getUpdatedSubTripDataList(trip.subTripData, tripData.id))
    } yield ((tripData, subTripData))
    val res = db.run(data.transactionally)
    res
    //db.close()
  })
}
If I close the connection after my work here, as you can see in the commented code, I get this error:
java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2#6c3ae2b6 rejected from java.util.concurrent.ThreadPoolExecutor#79d2d4eb[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
After calling the method without Future.sequence, like this:
val temp1 = list.map { trip =>
  val data = for {
    tripData <- TripDataRepository.insertQuery(trip.tripData)
    subTripData <- SubTripDataRepository.insertBatchQuery(getUpdatedSubTripDataList(trip.subTripData, tripData.id))
  } yield ((tripData, subTripData))
  val res = db.run(data.transactionally)
  res
}
I still got the "too many clients" error...
The root of this problem is that you are spinning up an unbounded number of Futures simultaneously, each connecting to the database - one per entry in list.
This can be solved by running your inserts in serial, forcing each insert batch to depend on the previous:
// Empty Future for the results. Replace Unit with the correct type - whatever
// "res" yields below.
val emptyFuture = Future.successful(Seq.empty[Unit])

// This will only insert one at a time. You could use list.sliding to batch the
// inserts if that was important.
val temp1 = list.foldLeft(emptyFuture) { (previousFuture, trip) =>
  previousFuture flatMap { previous =>
    // Inner code copied from your example.
    val data = for {
      tripData <- TripDataRepository.insertQuery(trip.tripData)
      subTripData <- SubTripDataRepository.insertBatchQuery(getUpdatedSubTripDataList(trip.subTripData, tripData.id))
    } yield ((tripData, subTripData))
    val res = db.run(data.transactionally)
    // Append this result once it completes, keeping the inserts sequential.
    res.map(result => previous :+ result)
  }
}
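If batching mattered, one possible variant (a sketch only: batchSize is an assumed value and buildTripAction stands for a hypothetical helper that returns the DBIO built by the for-comprehension above for a single trip) is to fold over groups of trips, running one transaction per batch. grouped is usually the better fit here, since sliding produces overlapping windows.

val batchSize = 50 // assumed batch size
val temp1 = list.grouped(batchSize).foldLeft(emptyFuture) { (previousFuture, trips) =>
  previousFuture.flatMap { previous =>
    // One transaction per batch of trips, still strictly sequential overall.
    val batchAction = DBIO.sequence(trips.map(buildTripAction)).transactionally
    db.run(batchAction).map(results => previous ++ results)
  }
}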