How to implement master/slave structure with squeryl and play framework - scala

I am running a Play Framework website that uses Squeryl and a MySQL database.
I need Squeryl to send all read queries to the slave and all write queries to the master.
How can I achieve this, either via Squeryl or via the JDBC connector itself?
Many thanks,

I don't tend to use MySQL myself, but here's an idea:
Based on the documentation here, the MySQL JDBC driver will round-robin amongst the slaves if the readOnly attribute is properly set on the Connection. To retrieve and change the current Connection you'll want to use code like:
transaction {
  val conn = Session.currentSession.connection
  conn.setReadOnly(true)
  // Your code here
}
Even better, you can create your own readOnlyTransaction method:
def readOnlyTransaction(f: => Unit) = {
  transaction {
    val conn = Session.currentSession.connection
    val orig = conn.isReadOnly // java.sql.Connection exposes isReadOnly, not getReadOnly
    conn.setReadOnly(true)
    f
    conn.setReadOnly(orig)
  }
}
Then use it like:
readOnlyTransaction {
  // Your code here
}
You'll probably want to clean that up a bit so the default readOnly state is reset if an exception occurs, but you get the general idea.
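For example, a try/finally variant along those lines (a sketch, not tested against any particular Squeryl version) could look like:

def readOnlyTransaction[A](f: => A): A =
  transaction {
    val conn = Session.currentSession.connection
    val orig = conn.isReadOnly
    conn.setReadOnly(true)
    try f
    finally conn.setReadOnly(orig) // restore the original mode even if f throws
  }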

Related

Cache Slick DBIO Actions

I am trying to speed up "SELECT * FROM WHERE name=?" kind of queries in a Play! + Scala app. I am using Play 2.4 + Scala 2.11 + the play-slick-1.1.1 package, which uses Slick 3.1.
My hypothesis was that Slick generates prepared statements from DBIO actions and executes them, so I tried to cache them by turning on the flag cachePrepStmts=true.
However, I still see "Preparing statement..." messages in the log, which means the prepared statements are not being cached! How should one instruct Slick to cache them?
If I run the following code, shouldn't the prepared statement be cached at some point?
for (i <- 1 until 100) {
  Await.result(db.run(doctorsTable.filter(_.userName === name).result), 10 seconds)
}
Slick config is as follows:
slick.dbs.default {
  driver = "slick.driver.MySQLDriver$"
  db {
    driver = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://localhost:3306/staging_db?useSSL=false&cachePrepStmts=true"
    user = "user"
    password = "passwd"
    numThreads = 1 // For now, just one thread in HikariCP
    properties = {
      cachePrepStmts = true
      prepStmtCacheSize = 250
      prepStmtCacheSqlLimit = 2048
    }
  }
}
Update 1
I tried the following, as per @pawel's suggestion of using compiled queries:
val compiledQuery = Compiled { name: Rep[String] =>
  doctorsTable.filter(_.userName === name)
}

val stTime = TimeUtil.getUtcTime
for (i <- 1 until 100) {
  FutureUtils.blockFuture(db.run(compiledQuery(name).result), 10)
}
val endTime = TimeUtil.getUtcTime - stTime
Logger.info(s"Time Taken HERE $endTime")
In my logs I still see statements like:
2017-01-16 21:34:00,510 DEBUG [db-1] s.j.J.statement [?:?] Preparing statement: select ...
The timing also remains the same. What is the desired output? Should I not see these statements anymore? How can I verify that prepared statements are indeed being reused?
You need to use Compiled queries - they do exactly what you want.
Just change the above code to:
val compiledQuery = Compiled { name: Rep[String] =>
  doctorsTable.filter(_.userName === name)
}

for (i <- 1 until 100) {
  Await.result(db.run(compiledQuery(name).result), 10 seconds)
}
I extracted name as a parameter above (because you usually want to vary some parameters in your PreparedStatements), but that part is entirely optional.
For further information you can refer to: http://slick.lightbend.com/doc/3.1.0/queries.html#compiled-queries
For MySQL you need to set an additional jdbc flag, useServerPrepStmts=true
HikariCP's MySQL configuration page links to a quite useful document that provides some simple performance tuning configuration options for MySQL jdbc.
Here are a few that I've found useful (for options not exposed by Hikari's API, you'll need to append them to the JDBC URL with &; see the example after this list). Be sure to read through the linked document and/or the MySQL documentation for each option; they should be mostly safe to use.
zeroDateTimeBehavior=convertToNull&characterEncoding=UTF-8
rewriteBatchedStatements=true
maintainTimeStats=false
cacheServerConfiguration=true
avoidCheckOnDuplicateKeyUpdateInSQL=true
dontTrackOpenResources=true
useLocalSessionState=true
cachePrepStmts=true
useServerPrepStmts=true
prepStmtCacheSize=500
prepStmtCacheSqlLimit=2048
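Appended to the connection URL from the question, that would look something like this (illustrative; keep only the flags you actually want):

url = "jdbc:mysql://localhost:3306/staging_db?useSSL=false&cachePrepStmts=true&useServerPrepStmts=true&prepStmtCacheSize=500&prepStmtCacheSqlLimit=2048"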
Also, note that statements are cached per connection; depending on what you set for Hikari's connection maxLifetime and what the server load is, memory usage will increase accordingly on both server and client (e.g. if you set the connection max lifetime to just under MySQL's default of 8 hours, both server and client will keep N prepared statements alive in memory for the life of each connection).
P.S. I'm curious whether the bottleneck is indeed statement caching or something specific to Slick.
EDIT
To log statements, enable the query log. On MySQL 5.7 you would add the following to your my.cnf:

general-log=1
general-log-file=/var/log/mysqlgeneral.log

Then run sudo touch /var/log/mysqlgeneral.log and restart mysqld. To turn query logging off, comment out the config lines above and restart again.

Is it possible to change the socket timeout for a single select?

Our project uses mongodb to store its documents. We have it configured in our DataSource.groovy file with a socketTimeout of 60000. Most of our queries don't get anywhere near that threshold, but I assume we have it just in case something goes wrong.
Anyway, now I am working on a query that is known to be long-running. It is pretty much guaranteed to be doing a tablespace scan, which we acknowledge and are OK with for now. The problem is that the amount of data we currently have causes it to exceed the socket timeout in random cases, and we expect the amount of data to grow considerably larger in the future.
So my question is, is it possible to increase/remove the socket timeout for a single mongodb select? I've found the following:
grailsApplication.mainContext.getBean('mongoDatastore').mongo.mongoOptions.socketTimeout = 0
That appears to work, but I assume it also changes the socket timeout application-wide, which we do not want. Help!
Update: After a lot of trial and error, I found a way to open another mongo connection that reuses the configuration, but leaves off the socketTimeout and appears to work.
class MongoService {
  def grailsApplication

  def openMongoClientWithoutSocketTimeout() {
    def datastore = grailsApplication.mainContext.getBean('mongoDatastore')
    def config = grailsApplication.config.grails.mongo
    def credentials = MongoCredential.createCredential(config.username, config.databaseName, config.password.toCharArray())
    def options = MongoClientOptions.builder()
        .autoConnectRetry(config.options.autoConnectRetry)
        .connectTimeout(config.options.connectTimeout)
        .build()
    new MongoClient(datastore.mongo.getAllAddress(), [credentials], options)
  }

  def selectCollection(mongoClient, collection) {
    def mongoConfig = grailsApplication.config.grails.mongo
    mongoClient.getDB(mongoConfig.databaseName).getCollection(collection)
  }
}
Not sure if this is the simplest solution though...
What version of mongo do you use? Possibly maxTimeMS can help you.
From the documentation:

db.collection.find({description: /August [0-9]+, 1969/}).maxTimeMS(50)

Manage Transactions on Business Layer

I want to use the TransactionScope class in my business layer to manage database operations in the data access layer.
Here is my sample code. When I execute it, it tries to enable the DTC. I want to do the operation without enabling the DTC.
I already checked the https://entlib.codeplex.com/discussions/32592 article. It didn't work for me. I read many articles on this subject, but none of them really touch on Enterprise Library, or I didn't see it.
By the way, I am able to use TransactionScope with the .NET SQL client and it works pretty well.
What should the inside of the SampleInsert() method look like?
Thanks,
Business Layer method:
public void SampleInsert()
{
    using (TransactionScope scope = new TransactionScope())
    {
        Sample1DAL dal1 = new Sample1DAL(null);
        Sample2DAL dal2 = new Sample2DAL(null);
        Sample3DAL dal3 = new Sample3DAL(null);

        dal1.SampleInsert();
        dal2.SampleInsert();
        dal3.SampleInsert();

        scope.Complete();
    }
}
Data Access Layer method:
//SampleInsert is structurally the same for each of the 3 DALs
public void SampleInsert()
{
    Database database = DatabaseFactory.CreateDatabase(Utility.DATABASE_INFO);
    using (DbConnection conn = database.CreateConnection())
    {
        conn.Open();
        DbCommand cmd = database.GetStoredProcCommand("P_TEST_INS", "some value3");
        database.ExecuteNonQuery(cmd);
    }
}
Yes, this will enable the DTC, because you are creating 3 DB connections within one TransactionScope. When more than one DB connection is created within the same TransactionScope, the local transaction escalates to a distributed transaction, and the DTC is enabled to manage distributed transactions. You will have to do it in a way that creates only one DB connection for the entire TransactionScope. I hope this gives you an idea.
After research and watching the query analyzer, I changed the SampleInsert() body as follows and it worked. The problem was, as ethicallogics mentioned, opening a new connection each time I accessed the database.
public void SampleInsert()
{
    Database database = DatabaseFactory.CreateDatabase(Utility.DATABASE_INFO);
    using (DbCommand cmd = database.GetStoredProcCommand("P_TEST_INS", "some value1"))
    {
        database.ExecuteNonQuery(cmd);
    }
}

How can I connect to a postgreSQL database in scala?

I want to know how I can do the following things in Scala:
Connect to a PostgreSQL database.
Write SQL queries like SELECT, UPDATE etc. to modify a table in that database.
I know that in Python I can do this using PygreSQL, but how do I do these things in Scala?
You need to add the dependency "org.postgresql" % "postgresql" % "9.3-1102-jdbc41" to build.sbt, and then you can adapt the following code to connect to and query the database. Replace DB_USER with your db user and DB_NAME with your db name.
import java.sql.{Connection, DriverManager, ResultSet}

object pgconn extends App {
  println("Postgres connector")

  // Load the PostgreSQL JDBC driver
  classOf[org.postgresql.Driver]
  val con_str = "jdbc:postgresql://localhost:5432/DB_NAME?user=DB_USER"

  val conn = DriverManager.getConnection(con_str)
  try {
    val stm = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
    val rs = stm.executeQuery("SELECT * from Users")
    while (rs.next) {
      println(rs.getString("quote"))
    }
  } finally {
    conn.close()
  }
}
I would recommend having a look at Doobie.
This chapter in the "Book of Doobie" gives a good sense of what your code will look like if you make use of this library.
This is the library of choice right now to solve this problem if you are interested in the pure FP side of Scala, i.e. scalaz, scalaz-stream (probably fs2 and cats soon) and referential transparency in general.
It's worth noting that Doobie is NOT an ORM. At its core, it's simply a nicer, higher-level API over JDBC.
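To give a flavor, here is a minimal sketch (assuming doobie with cats-effect 3 on the classpath; the connection details, table, and column are illustrative):

import cats.effect.IO
import cats.effect.unsafe.implicits.global // IO runtime for unsafeRunSync
import doobie._
import doobie.implicits._

object DoobieExample extends App {
  // Illustrative connection details; substitute your own
  val xa = Transactor.fromDriverManager[IO](
    "org.postgresql.Driver",
    "jdbc:postgresql://localhost:5432/DB_NAME",
    "DB_USER",
    "DB_PASSWORD"
  )

  // A query is a plain value; nothing touches the database until transact
  val quotes: List[String] =
    sql"SELECT quote FROM quotes LIMIT 5"
      .query[String]
      .to[List]
      .transact(xa)
      .unsafeRunSync()

  quotes.foreach(println)
}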
Take a look at the tutorial "Using Scala with JDBC to connect to MySQL", replace the db url, and add the right JDBC library. The link broke, so here's the content of the blog:
Using Scala with JDBC to connect to MySQL
A howto on connecting Scala to a MySQL database using JDBC. There are a number of database libraries for Scala, but I ran into problems getting most of them to work. I attempted to use scala.dbc, scala.dbc2, Scala Query and Querulous, but they either aren't supported, have a very limited feature set, or abstract SQL into a weird pseudo-language.
The Play Framework has a new database library called ANorm which tries to keep the interface close to basic SQL, with a slightly improved Scala interface. The jury is still out for me; I've only used it minimally on one project so far. Also, I've only seen it work within a Play app; it does not look like it can be extracted out too easily.
So I ended up going with a basic Java JDBC connection, and it turned out to be a fairly easy solution.
Here is the code for accessing a database using Scala and JDBC. You need to change the connection string parameters and modify the query for your database. This example was geared towards MySQL, but any Java JDBC driver should work the same with Scala.
Basic Query
import java.sql.{Connection, DriverManager, ResultSet}

// Change to Your Database Config
val conn_str = "jdbc:mysql://localhost:3306/DBNAME?user=DBUSER&password=DBPWD"

// Load the driver
classOf[com.mysql.jdbc.Driver]

// Setup the connection
val conn = DriverManager.getConnection(conn_str)
try {
  // Configure to be Read Only
  val statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

  // Execute Query
  val rs = statement.executeQuery("SELECT quote FROM quotes LIMIT 5")

  // Iterate Over ResultSet
  while (rs.next) {
    println(rs.getString("quote"))
  }
} finally {
  conn.close
}
You will need to download the mysql-connector jar.
Or if you are using Maven, here are the pom snippets to load the MySQL connector; you'll need to check what the latest version is.
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>5.1.12</version>
</dependency>
To run the example, save the code to a file (query_test.scala) and run it using the following, specifying the classpath to the connector jar:

scala -cp mysql-connector-java-5.1.12.jar:. query_test.scala
Insert, Update and Delete
To perform an insert, update or delete you need to create an updatable statement object. The execute command is slightly different, and you will most likely want to use parameters. Here's an example doing an insert with JDBC and Scala, using parameters.
// create database connection
val dbc = "jdbc:mysql://localhost:3306/DBNAME?user=DBUSER&password=DBPWD"
classOf[com.mysql.jdbc.Driver]
val conn = DriverManager.getConnection(dbc)
val statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE)

// do database insert
try {
  val prep = conn.prepareStatement("INSERT INTO quotes (quote, author) VALUES (?, ?)")
  prep.setString(1, "Nothing great was ever achieved without enthusiasm.")
  prep.setString(2, "Ralph Waldo Emerson")
  prep.executeUpdate
} finally {
  conn.close
}
We are using Squeryl, which is working well so far for us. Depending on your needs it may do the trick.
Here is a list of supported DBs and their adapters.
If you want/need to write your own SQL, but hate the JDBC interface, take a look at O/R Broker
I would recommend the Quill query library. Here is an introductory post by Li Haoyi to get started.
TL;DR
{
  import io.getquill._
  import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

  val pgDataSource = new org.postgresql.ds.PGSimpleDataSource()
  pgDataSource.setUser("postgres")
  pgDataSource.setPassword("example")

  val config = new HikariConfig()
  config.setDataSource(pgDataSource)

  val ctx = new PostgresJdbcContext(LowerCase, new HikariDataSource(config))
  import ctx._
}
Define a case class for the table mapping:
// mapping `city` table
case class City(
id: Int,
name: String,
countryCode: String,
district: String,
population: Int
)
and query all items:
# ctx.run(query[City])
cmd11.sc:1: SELECT x.id, x.name, x.countrycode, x.district, x.population FROM city x
val res11 = ctx.run(query[City])
^
res11: List[City] = List(
City(1, "Kabul", "AFG", "Kabol", 1780000),
City(2, "Qandahar", "AFG", "Qandahar", 237500),
City(3, "Herat", "AFG", "Herat", 186800),
...
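Filtered queries follow the same pattern, with runtime parameters passed via lift. A small sketch reusing the City mapping above:

val minPopulation = 1000000
val bigCities: List[City] =
  ctx.run(query[City].filter(c => c.population > lift(minPopulation)))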
ScalikeJDBC is quite easy to use. It allows you to write raw SQL using interpolated strings.
http://scalikejdbc.org/
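For example, a minimal sketch (the table, column, and credentials are illustrative; assumes the postgresql driver and scalikejdbc are on the classpath):

import scalikejdbc._

// Initialize a global connection pool once at startup
Class.forName("org.postgresql.Driver")
ConnectionPool.singleton("jdbc:postgresql://localhost:5432/DB_NAME", "DB_USER", "DB_PASSWORD")

// Interpolated values become bind parameters, not string concatenation
val quotes: List[String] = DB.readOnly { implicit session =>
  sql"SELECT quote FROM quotes WHERE author = ${"Ralph Waldo Emerson"}"
    .map(_.string("quote")).list.apply()
}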

correct usage of Zend_Db

I'm currently using the Zend_Db class to manage my database connections.
I had a few questions about it.
Does it manage opening connections smartly? (e.g., if I have a connection already open, does it know to make use of it, or do I have to check whether there's already an open connection before I open a new one?)
I use the following code to fetch results (fetching in FETCH_OBJ mode):
$final = $result->fetchAll();
return $final[0]->first_name;
For some reason, fetchRow doesn't work, so I constantly use fetchAll, even when I'll only have one result (like searching WHERE id = some number, where id is a PK).
My question is: how much more time/memory am I sacrificing when I use fetchAll instead of fetchRow when there's only one result?
I've created the following class to manage my connections:
require 'Zend/Db.php';

class dbconnect extends Zend_Db
{
    function init()
    {
        $params = array(......
        return Zend_Db::factory('PDO_MYSQL', $params);
    }
}
and then I call:

$handle = dbconnect::init();
$handle->select()....
Is this the best way? Does anyone have a better idea?
Thanks!
p.s. I'm sorry the code formatting came out sloppy here..
Lots of questions!
Does it manage opening connections smartly?
Yes - when you run your first query, a connection is created, and subsequent queries use the same connection. This is true as long as you're reusing the same Zend_Db adapter. I usually make it available to the entire application using Zend_Registry:
$db = Zend_Db::factory(...); // create Db instance
Zend_Registry::set('db', $db);

// in another class or file somewhere
$db = Zend_Registry::get('db');
$db->query(...); // run a query
The above code usually goes in your application bootstrap. I wouldn't bother extending the Zend_Db class just to initialise an instance of it.
Regarding fetchRow - I believe the key difference is that the query is limited to 1 row and the object returned is a Zend_Db_Table_Row rather than a Zend_Db_Table_Rowset (like an array of rows); it does not perform significantly slower.
fetchRow should be fine, so post the code that's not working; there's probably a mistake somewhere.
An addition to dcaunt's answer:
fetchAll returns an array OR a Zend_Db_Table_Rowset, depending on whether you call $zendDbTableModel->fetchAll() or $dbAdapter->fetchAll().
The same goes for fetchRow(). If you can't get fetchRow working for models, it's easier to use $model->getAdapter()->fetchRow($sqlString);