My code uses Slick 3.0. It has a common db object:
import com.typesafe.config.ConfigFactory
import slick.driver.PostgresDriver.api._

object Common {
  private val config = ConfigFactory.load()
  private[database] val db = Database.forURL(
    url = config.getString("db.url"),           // read from config
    user = config.getString("db.user"),         // read from config
    password = config.getString("db.password")  // read from config
  )
}
Then, in my database service objects, my methods look like:
private lazy val myTableQuery = TableQuery[MyTable]

def getTableObjects: Future[Seq[MyTableObject]] = {
  val action = myTableQuery.result
  Common.db.run(action)
}
where I'm reusing Common.db across multiple services.
In Slick 3.0, what's the idiomatic way to run a DB call?
I saw in the Slick 2.0 docs that an implicit session can be used.
However, I'm not sure if what I'm doing is correct in Slick 3.0.
You no longer need an implicit session. I'm currently on mobile, but check out the sample chapters of Essential Slick (http://underscore.io/training/courses/essential-slick/); it shows how to do it now. I am one of the authors.
Jono
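For reference, here is a minimal sketch of the Slick 3 style (assuming the question's MyTable/MyTableObject mapping and the Postgres driver): you build DBIO actions and hand them to db.run, which returns a Future; no session is involved.
import slick.driver.PostgresDriver.api._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// Reusing a single Database instance (like Common.db above) is correct:
// it owns the connection pool and should be created once per application.
class MyTableService(db: Database) {
  private lazy val myTableQuery = TableQuery[MyTable]

  def getTableObjects: Future[Seq[MyTableObject]] =
    db.run(myTableQuery.result)

  // Actions compose before anything runs; only db.run touches the database.
  def countAndFetch: Future[(Int, Seq[MyTableObject])] = {
    val action = for {
      n    <- myTableQuery.length.result
      rows <- myTableQuery.result
    } yield (n, rows)
    db.run(action)
  }
}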
I need to expose some models which aren't used directly in REST API methods.
With springfox I used Docket's additionalModels method to programmatically add models to the specification:
docket.additionalModels(
    typeResolver.resolve(XModel1.class),
    typeResolver.resolve(XModel2.class)
)
How to do it with springdoc?
I've created a dummy operation with a dummy parameter which includes all the required models, but I feel the approach has room for improvement.
With OpenApiCustomiser, you have access to the OpenAPI object.
You can add any object/operation you want without having to add annotations to your code.
You can have a look at the documentation for more details:
https://springdoc.org/#how-can-i-customise-the-openapi-object
In Kotlin:
import io.swagger.v3.core.converter.ModelConverters
import io.swagger.v3.oas.models.Components

fun components(): Components {
    val components = Components()
    val converter = ModelConverters.getInstance()
    val schema1 = converter.readAllAsResolvedSchema(XModel1::class.java)
    val schema2 = converter.readAllAsResolvedSchema(XModel2::class.java)
    schema1.referencedSchemas.forEach { s -> components.addSchemas(s.key, s.value) }
    schema2.referencedSchemas.forEach { s -> components.addSchemas(s.key, s.value) }
    return components
}
Additionally, you may need to set the following property in application.yml:
springdoc:
  remove-broken-reference-definitions: false
Let's say we modified an event to add a new field. I understand that we can handle serialization for event mapping changes as described in this documentation: https://www.lagomframework.com/documentation/1.5.x/scala/Serialization.html. But how does Lagom know which version an event is? When declaring and defining event case classes, we do not specify the event version, so how does Lagom's serialization know which event version mapping to use?
In the image below, there is a field called fromVersion. How does Lagom know the current version of events pulled from the event store?
So, to implement a migration you add the following code:
// These members live inside your JsonSerializerRegistry:
private val itemAddedMigration = new JsonMigration(2) {
  override def transform(fromVersion: Int, json: JsObject): JsObject = {
    if (fromVersion < 2) {
      json + ("discount" -> JsNumber(0.0d))
    } else {
      json
    }
  }
}

override def migrations = Map[String, JsonMigration](
  classOf[ItemAdded].getName -> itemAddedMigration
)
This means that all new events of type ItemAdded will now be written as version 2, while all previously stored events are treated as version 1.
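To see what the transform does, here is a small self-contained sketch (the itemId and price fields are made up for illustration; only the discount logic comes from the migration above):
import play.api.libs.json.{JsNumber, JsObject, Json}

object MigrationDemo extends App {
  // A version-1 ItemAdded payload, stored before "discount" existed.
  val v1: JsObject = Json.obj("itemId" -> "abc", "price" -> 10.0)

  // The same transform as in itemAddedMigration above.
  def transform(fromVersion: Int, json: JsObject): JsObject =
    if (fromVersion < 2) json + ("discount" -> JsNumber(0.0d)) else json

  // fromVersion is 1 because the stored manifest has no '#' suffix
  // (see the manifest parsing below), so the default discount is added
  // before the event's Format reads the JSON.
  println(transform(1, v1)) // the payload now contains "discount": 0
}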
The version detection is defined in the class PlayJsonSerializer. Please see the following code:
private def parseManifest(manifest: String) = {
  // The manifest is the event class name, optionally suffixed with "#<version>".
  val i = manifest.lastIndexOf('#')
  val fromVersion = if (i == -1) 1 else manifest.substring(i + 1).toInt
  val manifestClassName = if (i == -1) manifest else manifest.substring(0, i)
  (fromVersion, manifestClassName)
}
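For example, with a hypothetical class name:
parseManifest("com.example.ItemAdded")   // -> (1, "com.example.ItemAdded")
parseManifest("com.example.ItemAdded#2") // -> (2, "com.example.ItemAdded")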
You can also check it in the database. I use Cassandra: if I open my database, in the eventsbytag1 table I can find the ser_manifest field, which describes the version. Where it is just the class name, the event is version 1; where it has the additional '#2' suffix, it is version 2, and so on.
If you need more information about how this works, check the fromBinary method in the PlayJsonSerializer class.
I've created a REST API in Scala using Akka HTTP, spray-json and Slick. For authorization of routes, I've used OAuth2.
DAO to retrieve data (using plain SQL):
def getAllNotes: Future[Seq[UserEntity]] = {
  implicit val getUserResult = GetResult(r => UserEntity(r.<<, r.<<, r.<<, r.<<, r.<<, r.<<))
  query(s"select id, email, password, created_at, updated_at, deleted_at from users", getUserResult)
}
DAO to retrieve data (Slick table):
def getAll(): Future[Seq[A]] = {
  db.run(tableQ.result)
}
Here's the part of routing:
val route: Route = pathPrefix("auth") {
  get {
    path("tests") {
      complete(userDao.getAll.map(u => u.toList))
    } ~
    path("test") {
      complete(userDao.getAllNotes.map(u => u.toList))
    } ~
    path("testUsers") {
      baseApi(userDao.getAllNotes)
    } ~
    path("users") {
      baseApi(userDao.getAll())
    }
  }
}
implicit def baseApi(f: ToResponseMarshallable): Route = {
  authenticateOAuth2Async[AuthInfo[OauthAccount]]("realm", oauth2Authenticator) { auth =>
    pathEndOrSingleSlash {
      complete(f)
    }
  }
}
Functionally, all routes work as intended, but performance degrades when OAuth2 and Slick tables are used to fetch data.
The respective results of the above routes:
1. "users" => 10 requests per second (OAuth2: yes, Slick table: yes)
2. "testUsers" => 17 requests per second (OAuth2: yes, Slick table: no)
3. "tests" => 500 requests per second (OAuth2: no, Slick table: yes)
4. "test" => 5593 requests per second (OAuth2: no, Slick table: no)
My Problem
How can I optimize the REST requests that use OAuth2 and Slick tables?
Would it be good practice to use plain SQL instead of Slick tables and joins in all cases?
It seems that enabling OAuth2 has the biggest impact; however, the overhead added by Akka HTTP is negligible compared to the network/service call done in oauth2Authenticator. Even though it is done in an async manner, you still need to configure the execution context correctly (a good read: Explaining Akka Thread Pool Executor Config parameters).
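For example, a dedicated dispatcher for the authenticator keeps its Future work from starving Akka HTTP's default dispatcher (a sketch; the name and pool sizes are illustrative):
# application.conf
auth-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 4
    core-pool-size-factor = 2.0
    core-pool-size-max = 16
  }
  throughput = 100
}
In code you would then obtain it with system.dispatchers.lookup("auth-dispatcher") and use it as the ExecutionContext for the Future returned by oauth2Authenticator.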
Regarding the Slick part, it seems that you declare the implicit row mapper on each request; it can be a class-level val instead.
Take a look at Compiled Queries, and make sure that you allocate enough JDBC connections in your connection pool configuration.
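A sketch of both suggestions applied to the DAO above (UserEntity is the question's class; the UserTable mapping and its email column are assumptions for illustration):
import scala.concurrent.Future
import slick.jdbc.GetResult
import slick.jdbc.PostgresProfile.api._

class UserDao(db: Database) {
  private val users = TableQuery[UserTable]

  // Row mapper hoisted out of getAllNotes: built once, not on every request.
  implicit val getUserResult: GetResult[UserEntity] =
    GetResult(r => UserEntity(r.<<, r.<<, r.<<, r.<<, r.<<, r.<<))

  def getAllNotes: Future[Seq[UserEntity]] =
    db.run(sql"select id, email, password, created_at, updated_at, deleted_at from users".as[UserEntity])

  // Compiled once at startup; Slick skips recompiling the query on every call.
  private val byEmail = Compiled { (email: Rep[String]) =>
    users.filter(_.email === email)
  }

  def findByEmail(email: String): Future[Seq[UserEntity]] =
    db.run(byEmail(email).result)

  def getAll(): Future[Seq[UserEntity]] = db.run(users.result)
}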
In any case, the whole concept of this test does not seem very useful; one should define minimal requirements (e.g. at least 100 requests/second) and then start building on top of that.
The part regarding Slick has been answered multiple times; the most recent answer is here: Cache Slick DBIO Actions
This should dramatically improve response times for the plain Slick version.
I can't help you with OAUTH2 though :/
I have an object like this:
// I want to test this object
import java.io.IOException
import org.apache.http.client.HttpRequestRetryHandler
import org.apache.http.client.methods.HttpPost
import org.apache.http.client.protocol.HttpClientContext
import org.apache.http.impl.client.{CloseableHttpClient, HttpClients}
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager
import org.apache.http.protocol.HttpContext

object MyObject {
  protected val retryHandler: HttpRequestRetryHandler = new HttpRequestRetryHandler {
    def retryRequest(exception: IOException, executionCount: Int, context: HttpContext): Boolean = {
      true // implementation
    }
  }

  private val connectionManager: PoolingHttpClientConnectionManager = new PoolingHttpClientConnectionManager

  val httpClient: CloseableHttpClient = HttpClients.custom
    .setConnectionManager(connectionManager)
    .setRetryHandler(retryHandler)
    .build

  def methodPost = {
    // create a new context and a new Post instance
    val post = new HttpPost("url")
    val res = httpClient.execute(post, HttpClientContext.create)
    // check response code and then take action based on response code
  }

  def methodPut = {
    // same as methodPost except use HttpPut instead of HttpPost
  }
}
I want to test this object by mocking its dependencies, such as httpClient. How can I achieve this? Can I do it using Mockito, or is there a better way? If yes, how? And is there a better design for this class?
Your problem is that you have created hard-to-test code. You can turn here to watch some videos to understand why that is.
The short answer: directly calling new in your production code always makes testing harder. You could be using Mockito spies (see here for how that works).
But the better answer would be to rework your production code, for example to use dependency injection. Meaning: instead of your class creating the objects it needs itself (by calling new), it receives those objects from somewhere.
The typical (Java) approach would be something like:
public MyClass() {
    this(new SomethingINeed());
}

MyClass(SomethingINeed incoming) {
    this.something = incoming;
}
In other words: the normal usage path still calls new directly; but for unit testing you provide an alternative constructor that you can use to inject the thing(s) your class under test depends on.
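Applied to the object in the question, a possible Scala rework looks like this (a sketch; MyService is an illustrative name, and the retry/connection-manager setup is elided):
import org.apache.http.client.methods.HttpPost
import org.apache.http.client.protocol.HttpClientContext
import org.apache.http.impl.client.{CloseableHttpClient, HttpClients}

// A class instead of an object, so the collaborator can be injected.
class MyService(httpClient: CloseableHttpClient) {
  def methodPost(url: String): Int = {
    val post = new HttpPost(url)
    val res = httpClient.execute(post, HttpClientContext.create)
    try res.getStatusLine.getStatusCode
    finally res.close()
  }
}

// The normal usage path still builds the real client once
// (it can keep the custom connection manager and retry handler)...
object MyService extends MyService(HttpClients.createDefault())

// ...while a unit test injects a Mockito mock instead, e.g.:
//   val client = mock(classOf[CloseableHttpClient])
//   when(client.execute(any[HttpPost], any[HttpClientContext])).thenReturn(stubbedResponse)
//   new MyService(client).methodPost("http://example.com")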
I am trying to connect my Scala application to a Postgres cluster consisting of one master node and three slaves/read replicas. My application.conf currently looks like this:
slick {
  dbs {
    default {
      driver = "com.company.division.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://"${?DB_ADDR}":"${?DB_PORT}"/"${?DB_NAME}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
  }
}
Based on Postgres' documentation, I can define the master and slaves all in one JDBC URL, which will give me some failover capabilities, like this:
jdbc:postgresql://host1:port1,host2:port2/database
However, if I want to separate my connections by read and write capabilities, I have to define two JDBC URLs, like this:
jdbc:postgresql://node1,node2,node3/database?targetServerType=master
jdbc:postgresql://node1,node2,node3/database?targetServerType=preferSlave&loadBalanceHosts=true
How can I define two JDBC URLs within Slick? Should I define two separate entities under slick.dbs, or can my slick.dbs.default.db entity have multiple URLs defined?
Found an answer in Daniel Westheide's blog post. To summarize, it can be done with a DB wrapper class and custom Effect types that provide specific rules to control where read-only queries are directed vs. where write queries are directed.
Then your Slick configuration would look like this:
slick {
  dbs {
    default {
      driver = "com.yourdomain.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://"${?DB_PORT_5432_TCP_ADDR}":"${?DB_PORT_5432_TCP_PORT}"/"${?DB_NAME}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
    readonly {
      driver = "com.yourdomain.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = ${DB_READ_REPLICA_URL}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
  }
}
And it's up to your DB wrapper class to route queries to either 'default' or 'readonly'.
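A sketch of that wrapper idea (names are illustrative, not the blog's exact code): Slick's Effect types let the compiler enforce that only read-only actions reach the replica.
import scala.concurrent.Future
import slick.dbio.{DBIOAction, Effect, NoStream}
import slick.jdbc.PostgresProfile.api.Database

class DatabasePair(master: Database, replica: Database) {
  // Actions typed with at most Effect.Read are safe on the read replica.
  def runRead[R](action: DBIOAction[R, NoStream, Effect.Read]): Future[R] =
    replica.run(action)

  // Effect.All accepts any action, so every write path goes to the master.
  def runWrite[R](action: DBIOAction[R, NoStream, Effect.All]): Future[R] =
    master.run(action)
}

object DatabasePair {
  // Assumes plain Slick and the two config blocks above; with Play/Lagom
  // you would go through DatabaseConfigProvider instead.
  def fromConfig(): DatabasePair =
    new DatabasePair(
      master = Database.forConfig("slick.dbs.default.db"),
      replica = Database.forConfig("slick.dbs.readonly.db")
    )
}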