Spray IO with Slick Use Conf File - scala

I'm using Spray with Slick. Spray is actually very easy to use, and the same goes for Slick. However, the infamous Slick way to connect to a database looks like this:
Database.forURL("jdbc:mysql://localhost:3306/SprayBlog?characterEncoding=UTF-8", user="xxxx", password="xxx", driver = "com.mysql.jdbc.Driver") withSession {
  implicit session =>
    (Article.articles.ddl ++ User.users.ddl).create
}
I hate typing this much whenever I connect to the database. I have used the Play-Slick framework before, and Play has application.conf, in which I can store my database connection address, username and password. I don't know if this is true, but shouldn't people store their database info in a protected file? I may be wrong, but I feel the conf file is blocked from outside access and encrypted.
So is there an easier way to perform database operations? And if I do put the info in the conf file, how can I access it?

With Slick 2.1.0 you can use Database.forConfig().
In application.conf:
db {
  url = "jdbc:mysql://localhost/DatabaseName"
  driver = "com.mysql.jdbc.Driver"
  user = "root"
  password = ""
}
In your database access code:
import scala.slick.driver.MySQLDriver.simple._
import models.Item
import tables.ItemTable

class ItemService {
  val items = TableQuery[ItemTable]

  def all: List[Item] = Database.forConfig("db") withSession { implicit session: Session =>
    items.list
  }
}

Store your db in a val, or write a def that abstracts over what you feel is repetitive; withSession is good for connection scoping. I don't know Spray's conf files, but they likely use Typesafe Config, a library for reading such files, and Spray probably also exposes its config through an API. Something like the sketch below would cut the repetition.
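For example, here is a minimal sketch of that idea, assuming Slick 2.1's Database.forConfig and the db block from application.conf shown above (the DB object and withDb helper names are mine):

import scala.slick.driver.MySQLDriver.simple._

object DB {
  // Built once from the "db" block in application.conf, then reused by every call site.
  val db = Database.forConfig("db")

  // Hides the repetitive withSession boilerplate behind one helper.
  def withDb[T](f: Session => T): T = db.withSession(f)
}

With that in place, the connection snippet from the question shrinks to DB.withDb { implicit session => (Article.articles.ddl ++ User.users.ddl).create }.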

Related

Mocking DynamoDB in Spark with Mockito

I want to mock the Utilities function dynamoDBStatusWrite so that when my Spark program runs, it will not hit DynamoDB.
Below is my mocking and test case code:
class FileConversion1Test extends FlatSpec with MockitoSugar with Matchers with ArgumentMatchersSugar with SparkSessionTestWrapper {

  "File Conversion" should "convert the file to" in {
    val utility = mock[Utilities1]
    val client1 = mock[AmazonDynamoDB]
    val dynamoDB1 = mock[DynamoDB]
    val dynamoDBFunc = mock[Utilities1].dynamoDBStatusWrite("test","test","test","test")
    val objUtilities1 = new Utilities1
    FieldSetter.setField(objUtilities1, objUtilities1.getClass.getDeclaredField("client"), client1)
    FieldSetter.setField(objUtilities1, objUtilities1.getClass.getDeclaredField("dynamoDB"), dynamoDB1)
    FieldSetter.setField(objUtilities1, objUtilities1.getClass.getField("dynamoDBStatusWrite"), dynamoDBFunc)
    when(utility.dynamoDBStatusWrite("test","test","test","test")).thenReturn("pass")
    assert(FileConversion1.fileConversionFunc(spark,"src/test/inputfiles/userdata1.csv","parquet","src/test/output","exec1234567","service123") === "passed")
  }
}
My Spark program should not try to connect to DynamoDB, but it still tries to connect.
You have two problems there. For starters, the fact that you mock something doesn't automatically replace it in your system; you need to build your software so that components are injected, and then in the test you provide a mock version of them. I.e., fileConversionFunc should receive another parameter with the connector to Dynamo.
That said, it's considered bad practice to mock library/third-party classes. What you should do instead is create your own component that encapsulates the interaction with Dynamo, and then mock your component, since it's an API you control; a rough sketch of both ideas follows below.
You can find a detailed explanation of why here
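For illustration, here is a minimal sketch of both points; every name in it (StatusWriter, DynamoStatusWriter, FileConversion) is hypothetical, not taken from the original code:

trait StatusWriter {
  def writeStatus(execId: String, service: String, status: String): String
}

// Production implementation: the real DynamoDB PutItem call would live here.
class DynamoStatusWriter extends StatusWriter {
  def writeStatus(execId: String, service: String, status: String): String = "pass"
}

object FileConversion {
  // The writer is a parameter, so tests can inject a mock instead of the real client.
  def fileConversionFunc(inputPath: String, outputFormat: String, outputPath: String, writer: StatusWriter): String = {
    // ... Spark conversion logic elided ...
    writer.writeStatus("exec1234567", "service123", "done")
    "passed"
  }
}

In the test you would then write val writer = mock[StatusWriter], stub writer.writeStatus(...) to return "pass", and pass writer into fileConversionFunc, so the real DynamoDB client is never touched.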

Throttle or debounce method calls

Let's say I have a method that updates some date in the DB:
def updateLastConsultationDate(userId: String): Unit = ???
How can I throttle/debounce that method easily so that it won't be run more than once an hour per user?
I'd like the simplest possible solution, not based on any event-bus, actor lib or persistence layer. I'd like an in-memory solution (and I am aware of the risks).
I've seen solutions for throttling in Scala based on Akka's Throttler, but starting to use actors just for throttling method calls really looks like overkill to me. Isn't there a very simple way to do that?
Edit: since it seems this wasn't clear enough, here's a visual representation of what I want, implemented in JS. As you can see, throttling may not only be about filtering out subsequent calls, but also about postponing calls (also called trailing events in js/lodash/underscore). The solution I'm looking for can't be based on purely synchronous code.
This sounds like a great job for a ReactiveX-based solution. On Scala, Monix is my favorite one. Here's an Ammonite REPL session illustrating it:
import $ivy.`io.monix::monix:2.1.0` // I'm using Ammonite's magic imports; it's equivalent to adding "io.monix" %% "monix" % "2.1.0" to your libraryDependencies in SBT
import scala.concurrent.duration.DurationInt
import monix.reactive.subjects.ConcurrentSubject
import monix.reactive.Consumer
import monix.execution.Scheduler.Implicits.global
import monix.eval.Task
class DbUpdater {
  val publish = ConcurrentSubject.publish[String]
  val throttled = publish.throttleFirst(1 hour)
  val cancelHandle = throttled.consumeWith(
      Consumer.foreach(userId =>
        println(s"update your database with $userId here")))
    .runAsync

  def updateLastConsultationDate(userId: String): Unit = {
    publish.onNext(userId)
  }

  def stop(): Unit = cancelHandle.cancel()
}
Yes, and with Scala.js this code will work in the browser, too, if it's important for you.
Since you ask for the simplest possible solution, you can store the last update time per user in a mutable map, e.g. val lastUpdateByUser = mutable.Map.empty[String, Long], which you consult before allowing an update
if (lastUpdateByUser.getOrElse(userName, 0L) + 60*60*1000 < System.currentTimeMillis) updateLastConsultationDate(...)
and update when a user actually performs an update
lastUpdateByUser(userName) = System.currentTimeMillis
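Put together, a minimal single-threaded sketch of that idea (object and method names are mine; real code would need to consider thread safety, e.g. a concurrent map or synchronization):

import scala.collection.mutable

object ThrottledUpdates {
  private val lastUpdateByUser = mutable.Map.empty[String, Long]
  private val oneHourMillis = 60 * 60 * 1000L

  def maybeUpdateLastConsultationDate(userId: String): Unit = {
    val now = System.currentTimeMillis
    if (lastUpdateByUser.getOrElse(userId, 0L) + oneHourMillis < now) {
      updateLastConsultationDate(userId)
      lastUpdateByUser(userId) = now
    }
  }

  // Stand-in for the real DB update from the question.
  private def updateLastConsultationDate(userId: String): Unit =
    println(s"updating last consultation date for $userId")
}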
One way to throttle would be to maintain a count in a Redis instance. Doing so would ensure that the DB wouldn't be updated more often than allowed, no matter how many Scala processes you were running, because the state is stored outside of the process.
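As a hedged sketch of that approach, assuming the Jedis client is available (the key naming and class name are mine): INCR a per-user counter, give it a one-hour expiry on first use, and only run the update when the counter is at 1 for the current window.

import redis.clients.jedis.Jedis

class RedisThrottle(jedis: Jedis) {
  def maybeUpdateLastConsultationDate(userId: String): Unit = {
    val key = s"lastConsultation:$userId"
    val count = jedis.incr(key)      // value of the counter after incrementing
    if (count == 1L) {
      jedis.expire(key, 60 * 60)     // start a one-hour window on the first hit
      updateLastConsultationDate(userId)
    }
  }

  // Stand-in for the real DB update from the question.
  private def updateLastConsultationDate(userId: String): Unit =
    println(s"updating last consultation date for $userId")
}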

Why do I need to import driver to use lifted queries?

I'm using Slick 2.0 for my interaction with the database.
As recommended, I've added an external connection pool using BoneCP.
val driver = Class.forName(Database.driver)
val ds = new BoneCPDataSource();
ds.setJdbcUrl(Database.jdbcUri);
ds.setUsername(Database.user);
ds.setPassword(Database.password);
using it I've created my connection:
val db = scala.slick.jdbc.JdbcBackend.Database.forDataSource(ds)
db withSession { implicit session =>
  (...)
}
Now if I do something like this on my TableQuery object:
provisioning.foreach {(...)}
it says that there is no foreach method.
So I've imported:
import scala.slick.driver.PostgresDriver.simple._
and now everything works well.
What I don't like is that my code is tied to a certain database implementation.
Can I somehow make it read the "db dialect" from the config file?
So now I know how this should be done properly.
I attended a presentation on Slick by Stefan Zeiger, the original author of the library, and he pointed out that since PostgresDriver, like the other drivers, is a trait, we can mix it in instead of importing it. That gives us a chance to choose, dynamically, the driver that fits our database, roughly as in the sketch below.
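A rough sketch of that idea, modeled on the multi-database pattern from the Slick manual (the DriverComponent/DAL structure comes from that pattern, not from the answer; how you map a config value to a driver is up to you):

import scala.slick.driver.JdbcProfile

trait DriverComponent {
  val driver: JdbcProfile
}

trait ProvisioningComponent { this: DriverComponent =>
  import driver.simple._

  class Provisioning(tag: Tag) extends Table[(Long, String)](tag, "provisioning") {
    def id = column[Long]("id", O.PrimaryKey)
    def name = column[String]("name")
    def * = (id, name)
  }
  val provisioning = TableQuery[Provisioning]
}

// The concrete driver is chosen once, at wiring time:
class DAL(val driver: JdbcProfile) extends DriverComponent with ProvisioningComponent

// Wiring, e.g. in your application setup (the driver could be picked based on config):
// val dal = new DAL(scala.slick.driver.PostgresDriver)   // or H2Driver, MySQLDriver, ...
// import dal._; import dal.driver.simple._
// provisioning.foreach { ... }   // now compiles without a hard-coded driver import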

Close Connection for Mongodb using Casbah API

I am not finding any useful information about how to close a connection for MongoDB using the Casbah API. I have defined multiple methods, and in each method I need to establish a connection with MongoDB; after the work is done I need to close it too. I am using Scala.
One of the methods looks like this (code example in Scala):
import com.mongodb.casbah.Imports._
import com.mongodb.casbah.MongoConnection

def index = {
  val mongoConn = MongoConnection(configuration("hostname"))
  val log = mongoConn("ab")("log")
  val cursor = log.find()
  val data = for {x <- cursor} yield x.getAs[BasicDBObject]("message").get
  html.index(data.toList)
  //mongoConn.close() <-- here I want to close the connection, but .close() is not working
}
It is unclear from your question why exactly close is not working. Does it throw some exception, does it not compile, or does it simply have no effect?
But since MongoConnection is a thin wrapper over com.mongodb.Mongo, you could work with underlying Mongo directly, just like in plain old Java driver:
val mongoConn = MongoConnection(configuration("hostname"))
mongoConn.underlying.close()
Actually, that's exactly how close is implemented in Casbah.
Try using .close instead. If a function doesn't take arguments in Scala, you sometimes don't use parentheses after it.
EDIT: I had wrong information; edited to include correct information + link.

Playframework evolutions files compatible with both postgres and h2

I've been developing a web site with the Play framework (scala) using H2 as a backend. Testing is nicely integrated, especially with the ability to run tests against an in-memory H2 db.
Now I'd like to move my datastore over to Postgres for various convenience reasons. This leaves me with a problem: How to continue to test and retain the simplicity of a fresh db for each test run? I see on the net that some people manage to run live against postgres and test against H2. However the two are not entirely compatible at the SQL level (even with H2 in Postgres compatibility mode). For instance SERIAL, BIGSERIAL and BYTEA are not supported on H2.
Can I do this by using a constrained compatible intersection of both dialects, or is there another technique I'm missing?
Thanks for any help.
Alex
I know this is an older post, but it looks like there still isn't an obvious solution a few years later. As a short-term fix, in Play 2.4.x-2.5.x (so far only tested there), you can alter the way evolutions get applied during tests by creating a custom evolutions reader:
package support

import play.api.db.evolutions.{ClassLoaderEvolutionsReader, Evolutions, ResourceEvolutionsReader}
import java.io.{ByteArrayInputStream, InputStream}
import java.nio.charset.StandardCharsets
import scala.io.Source
import scala.util.Try

class EvolutionTransformingReader(
    classLoader: ClassLoader = classOf[ClassLoaderEvolutionsReader].getClassLoader,
    prefix: String = "")
  extends ResourceEvolutionsReader {

  def loadResource(db: String, revision: Int): Option[InputStream] =
    for {
      stream <- Option(classLoader.getResourceAsStream(prefix + Evolutions.resourceName(db, revision)))
      lines <- Try(Source.fromInputStream(stream).getLines).toOption
      updated = lines map convertPostgresLinesToH2
    } yield convertLinesToInputStream(updated)

  private val ColumnRename = """(?i)\s*ALTER TABLE (\w+) RENAME COLUMN (\w+) TO (\w+);""".r

  private def convertPostgresLinesToH2(line: String): String =
    line match {
      case ColumnRename(tableName, oldColumn, newColumn) =>
        s"""ALTER TABLE $tableName ALTER COLUMN $oldColumn RENAME TO $newColumn;"""
      case _ => line
    }

  private def convertLinesToInputStream(lines: Iterator[String]): InputStream =
    new ByteArrayInputStream(lines.mkString("\n").getBytes(StandardCharsets.UTF_8))
}
Then pass it into the place where you apply evolutions during your tests:
Evolutions.applyEvolutions(registry.database, new EvolutionTransformingReader())
Note that the reader is still in a pretty dumb state (it assumes the SQL statements are one-liners, which is not guaranteed), but this should be enough to get anyone started.
As evolution files use SQL directly, unless you limit yourself to a common cross-db-compatible subset of SQL you may have issues.
There is no real solution for that, but you can still use a fresh db for testing. Just set the following:
%test.jpa.ddl=create-drop
%test.db.driver=org.postgresql.Driver
%test.db=<jdbc url>
//etc
This should create a new Postgres connection for tests, create the db from scratch, run the evolutions, run the tests and remove all data once done.