Vert.x Web: How to split and organize routes across multiple files? (Scala)

So far I'm really loving Vertx. The documentation is great, and the cross-language support is amazing.
However, the examples and documentation for my specific problem all seem to be out of date. I guess the API has changed a bit since 3.4.x (I'm currently using 3.9.1 with Scala 2.13.1).
I'd like to be able to split my routes across multiple files to keep things organized. For example, I'd like to have a UserRoutes file and a separate TodoRoutes file, and have both of those files used in my ServerVerticle.
The only way I've found to do this is basically:
UserRoutes:
object UserRoutes {
  def createUser(context: RoutingContext): Unit = {
    // do work
  }

  def login(context: RoutingContext): Unit = {
    // do work
  }
}
ServerVerticle:
class ServerVerticle(vertx: Vertx) extends AbstractVerticle {
  override def start(): Unit = {
    val router: Router = Router.router(vertx)
    router.post("/user/new").handler(UserRoutes.createUser)
    router.post("/user/login").handler(UserRoutes.login)
    ....
  }
}
What I would really like to do instead:
object UserRoutes {
  // somehow get a reference to `router` or create a new one
  router.post("/user/new", (context: RoutingContext) => {
    // do work
  })

  router.post("/user/login", (context: RoutingContext) => {
    // do work
  })
}
The reason I'd prefer this is that it makes it easier to see exactly what is being done in UserRoutes: what path is being used, what parameters are required, etc.
I tried taking a similar approach to this example application, but apparently that's not really possible with the Vertx API as it exists in 3.9?
What is the best way to do this? Am I missing something? How do large REST APIs break up their routes?
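One way to get close to the desired shape, sketched against the plain Java Vert.x API from Scala (the register helper below is illustrative, not an existing Vert.x API), is to let each routes object register itself on a router owned by the verticle:

import io.vertx.ext.web.{Router, RoutingContext}

object UserRoutes {
  // Register this file's routes on a router the caller owns.
  def register(router: Router): Unit = {
    router.post("/user/new").handler(createUser)
    router.post("/user/login").handler(login)
  }

  private def createUser(context: RoutingContext): Unit = { /* do work */ }
  private def login(context: RoutingContext): Unit = { /* do work */ }
}

// In ServerVerticle.start():
//   val router = Router.router(vertx)
//   UserRoutes.register(router)
//   TodoRoutes.register(router)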

I suppose one such way to do this would be something like the following:
UserRoutes
class UserRoutes(vertx: Vertx) {
  val router: Router = {
    val router = Router.router(vertx)
    router.get("/users/login").handler(context => {
      // do work
    })
    router
  }
}
And then in the ServerVerticle:
...
server
  .requestHandler(new UserRoutes(vertx).router)
  .requestHandler(new TodoRoutes(vertx).router)
  .listen(...)
This breaks things up nicely, but I still need to figure out a decent way to avoid repeating the CORS options and other shared setup in each ___Routes file. There are plenty of ways to do this, of course, but the question remains: is this the right approach?
Further, I could really mix in some parts of the approach outlined in the initial question. The UserRoutes class could still have the createUser() method, and I could simply use this in the apply() call or somewhere else.
Sub-routers are another approach (see the sketch below), but there are still some issues with this.
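For reference, a minimal sub-router sketch, assuming the plain Java Vert.x 3.x API called from Scala (the mount points and the CORS comment are illustrative): each ___Routes class keeps building its own Router, and the server verticle mounts them all under one root router, so shared handlers such as CORS are registered exactly once.

import io.vertx.core.AbstractVerticle
import io.vertx.ext.web.Router

class ServerVerticle extends AbstractVerticle {
  override def start(): Unit = {
    val root = Router.router(vertx)
    // Register shared concerns once on the root router, e.g.:
    // root.route().handler(CorsHandler.create("*"))
    root.mountSubRouter("/user", new UserRoutes(vertx).router)
    root.mountSubRouter("/todo", new TodoRoutes(vertx).router)
    vertx.createHttpServer().requestHandler(root).listen(8080)
  }
}

Note that paths inside a mounted router become relative to the mount point, so UserRoutes would register "/login" rather than "/users/login".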
It would also be nice if each ___Routes file could create its own Verticle. If each set of routes were running on its own thread, this could speed things up quite nicely.
Truth be told... as always in Scala there are a plethora of ways to tackle this. But which is the best, when we want:
Logical organization
Utilize the strengths of Vertx
Sub-verticles maybe?
Avoid shared state
I need guidance! There are so many options I don't know where to start.
EDIT:
Maybe it's best to avoid assigning each group of routes to a Verticle, and let Vert.x do its thing?
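That is usually the simpler path: keep one verticle type that owns the full router, and scale by deploying several instances of it, letting Vert.x spread incoming connections across their event loops. A minimal sketch, assuming a no-argument ServerVerticle (unlike the constructor-taking variant above) and an illustrative fully-qualified name:

import io.vertx.core.{DeploymentOptions, Vertx}

object Main {
  def main(args: Array[String]): Unit = {
    val vertx = Vertx.vertx()
    // Four instances of the same verticle: each runs on its own event loop,
    // and Vert.x round-robins incoming HTTP connections between them.
    val options = new DeploymentOptions().setInstances(4)
    vertx.deployVerticle("com.example.ServerVerticle", options)
  }
}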

Related

How to correctly use suspend functions with coroutines on webflux?

I'm new to reactive programming, and because I've already used Kotlin with spring-web in the past, I decided to go with spring-webflux on this new project I'm working on. Then I discovered the Mono and Flux APIs and decided to use spring-data-r2dbc to keep a fully reactive stack. (I'm aware I don't know how far this new project may be from meeting all reactive expectations; I'm doing this to learn a new tool, not because this is the perfect scenario for it.)
Then I noticed I could replace all the reactive-streams APIs from webflux with Kotlin's native coroutines. I opted for coroutines simply to learn, and to have less 'external frameworky' code.
My application is quite simple (it's a URL shortener):
1. parse some URL out of the HTTP request's body into 3 parts
2. exchange each part for its Postgres id in each respective table
3. concatenate these 3 ids into a new URL, sending a 200 HTTP response with this new URL
My reactive controller is:
@Configuration
class UrlRouter {
    @Bean
    fun urlRoutes(
        urlHandler: UrlHandler,
        redirectHandler: RedirectHandler
    ) = coRouter {
        POST("/e", urlHandler::encode)
        GET("/{*url}", redirectHandler::redirect)
    }
}
As you can imagine, UrlHandler is responsible for the steps numbered above, and RedirectHandler does the opposite: receiving an encoded URL, it redirects to the original URL received in step 1.
question 1: checking coRouter, I assumed that for each HTTP call, Spring will start a new coroutine to resolve that call (as opposed to a new thread in traditional spring-web), and each of these can create and depend on several other sub-coroutines. Is this right? Does this hierarchy exist?
Here's my UrlHandler fragment:
@Component
class UrlHandler(
    private val cache: CacheService,
    @Value("\${redirect-url-prefix}") private val prefix: String
) {
    companion object {
        val mapper = jacksonObjectMapper()
    }

    suspend fun encode(serverRequest: ServerRequest): ServerResponse =
        try {
            val bodyMap: Map<String, String> = mapper.readValue(serverRequest.awaitBody<String>())
            // parseUrl being a string extension function just splitting,
            // which could throw IndexOutOfBoundsException
            val (host, path, query) = bodyMap["url"]!!.parseUrl()
            val hostId: Long = cache.findIdFromHost(host)
            val pathId: Long? = cache.findIdFromPath(path)
            val queryId: Long? = cache.findIdFromQuery(query)
            val encodedUrl = "$prefix/${someOmmitedStringConcatenation(hostId, pathId, queryId)}"
            ok().bodyValueAndAwait(mapOf("url" to encodedUrl))
        } catch (e: IndexOutOfBoundsException) {
            ServerResponse.badRequest().buildAndAwait()
        }
}
All three findIdFrom*** calls try to retrieve an existing id, and if it doesn't exist, save a new entity and return a new id from a Postgres sequence. This is done through CoroutineCrudRepository interfaces. Since my methods should always suspend, all 3 findIdFrom*** also suspend:
@Repository
interface HostUrlRepo : CoroutineCrudRepository<HostUrl, Long> {
    suspend fun findByHost(host: String): HostUrl?
}
question 2: looking here, I've found I can either invoke reactive query methods or have native suspending functions. Since I've read that methods should always suspend, I've kept using suspend. Is this bad/wrong in any way?
These 3 findIdFrom*** calls are independent and could run in parallel; only at someOmmitedStringConcatenation should I have to wait for any unfinished calls to actually build my encoded URL.
question 3: since every single method has the suspend modifier, will it run exactly as in the traditional imperative, sequential paradigm (wasting any benefit of parallelism)?
question 4: is this a valid scenario for coroutine usage? If so, how should I change my code to best fit the parallelism I want above?
possible solutions I've found for question 4:
question 4.1: source 1 suggests wrapping the body of each findIdFrom*** with withContext(Dispatchers.IO) { /* actual code here */ } and then, in the encode function:
coroutineScope {
    val hostIdDeferred = async { findIdFrom***() }
    val pathIdDeferred = async { findIdFrom***() }
    val queryIdDeferred = async { findIdFrom***() }
}
and when I want to use them, just call hostIdDeferred.await() to get the value. If I'm using the Dispatchers.IO dispatcher to run code inside new child coroutines, why is coroutineScope necessary? Is this the correct way: specifying a dispatcher for the new child coroutines and then using coroutineScope to get deferred vals?
question 4.2: source 2 suggests val resultOne = Async(Dispatchers.IO) { function1() }, but IntelliJ wasn't able to recognize/import any Async expression. How can I use this one, and how does it differ from the previous one?
I'm open to improving and clarifying any point of this question.
I'll try to answer some of your questions:
q2: No, nothing wrong with it. Suspend methods can propagate all the way back to a controller. If your controllers are reactive, i.e. if you use RSocket with org.springframework.messaging.handler.annotation.MessageMapping, then even controller methods can be suspend.
q3: right, but your source code is still much simpler
q4.2: I wouldn't consider that website a trustworthy source. There is official documentation with examples: async
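To illustrate the officially documented pattern for q4/q4.1: async needs a CoroutineScope receiver, and coroutineScope both supplies that scope and guarantees that all children complete (or fail together) before it returns; that is why it is needed even if the work runs on Dispatchers.IO. A minimal runnable sketch, where the findIdFrom* stubs and delays stand in for the real repository calls:

import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Stubs standing in for the suspending, repository-backed lookups.
suspend fun findIdFromHost(host: String): Long { delay(100); return 1L }
suspend fun findIdFromPath(path: String): Long? { delay(100); return 2L }
suspend fun findIdFromQuery(query: String): Long? { delay(100); return 3L }

suspend fun encodeIds(host: String, path: String, query: String): String =
    coroutineScope {
        // All three lookups start concurrently; we only suspend at await().
        val hostId = async { findIdFromHost(host) }
        val pathId = async { findIdFromPath(path) }
        val queryId = async { findIdFromQuery(query) }
        "${hostId.await()}/${pathId.await()}/${queryId.await()}"
    }

fun main() = runBlocking {
    // Completes in roughly 100 ms (parallel) rather than ~300 ms (sequential).
    println(encodeIds("example.com", "/some/path", "q=1"))
}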

How to properly pass many dependencies (external APIs) to a class in Scala?

I'm working on an application that uses many APIs to collect data. For the APIs I have a trait like the one below.
trait api {
  def url: Foo
  def parse: Bar
}
Also, there are about 10 implementations of the api trait (one for each API). Inside a parent actor, I want to create a child actor for every external API. I created a new trait and an implementation:
trait ExternalApis {
  val apiList: List[api]
}

object MyApis extends ExternalApis {
  val apiList = List(new ApiImpl1, ..., new ApiImpl10)
}
so now I can pass the MyApis object (or any other implementation of ExternalApis) to the parent actor and map over apiList to create the child actors.
It seems to me I'm missing something. Is there a more proper way to do it?
The implementation that you have made looks nearly ready. Just a couple of things I would like to add:
Passing an API list may not be the most ideal way to do such a thing. Any time you would like to add or remove something from the API list, you would have to change different areas of the code. A suggestion would be to read this from a config file, where the config would contain things such as url, username, password, etc. (see the sketch below).
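As a sketch of that suggestion, assuming Typesafe Config (the apis key, the field names, and the ApiConfig case class are all hypothetical), the list could live in application.conf and be materialized at startup:

import com.typesafe.config.{Config, ConfigFactory}
import scala.jdk.CollectionConverters._

// Hypothetical application.conf:
//   apis = [
//     { name = "api1", url = "https://api1.example.com" },
//     { name = "api2", url = "https://api2.example.com" }
//   ]
final case class ApiConfig(name: String, url: String)

object ApiConfigLoader {
  def load(config: Config = ConfigFactory.load()): List[ApiConfig] =
    config.getConfigList("apis").asScala.toList.map { c =>
      ApiConfig(c.getString("name"), c.getString("url"))
    }
}

// The parent actor can then map over ApiConfigLoader.load()
// to spawn one child actor per configured API.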
If you could provide more insight into the usage of these APIs, it would help my answer a lot.
Hope this helped!

Transactional method in Scala Play with Slick (similar to Spring #Transactional, maybe?)

I know Scala, as a functional language, is supposed to work differently from a common OO language such as Java, but I'm sure there has to be a way to wrap a group of database changes in a single transaction, ensuring atomicity as well as every other ACID property.
As explained in the Slick docs (http://slick.lightbend.com/doc/3.1.0/dbio.html), DBIOAction allows grouping db operations in a transaction like this:
val a = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

val f: Future[Unit] = db.run(a)
However, in my use case (and most real-world examples I can think of), I have a code structure with a Controller, which exposes the code for my REST endpoint; that controller calls multiple services, and each service delegates database operations to DAOs.
A rough example of my usual code structure:
class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData) = {
    for {
      savedUser <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
    } yield savedUser.copy(address = savedAddress)
  }
}

class SlickUserDao {
  def save(user: User) = {
    db.run((UserSchema.users returning UserSchema.users).insertOrUpdate(user))
  }
}
This is a simple example; most real cases have more complex business logic in the service layer.
I don't want:
My DAOs to have business logic and decide which database operations to run.
To return DBIO actions from my DAOs, exposing the persistence classes. That completely defeats the purpose of using DAOs in the first place and makes further refactoring much harder.
But I definitely want a transaction around my entire Controller, to ensure that if any code fails, all the changes done in the execution of that method will be rolled back.
How can I implement full controller transactionality with Slick in a Scala Play application? I can't seem to find any documentation on how to do that.
Also, how can I disable auto-commit in Slick? I'm sure there is a way and I'm just missing something.
EDIT:
So, reading a bit more about it, I feel I now understand better how Slick uses database connections and sessions. This helped a lot: http://tastefulcode.com/2015/03/19/modern-database-access-scala-slick/.
What I'm doing is a case of composing in futures and, based on that article, there's no way to use the same connection and session for multiple operations of that kind.
Problem is: I really can't use any other kind of composition. I have considerable business logic that needs to be executed in between queries.
I guess I can change my code to allow me to use action composition, but as I mentioned before, that forces me to write my business logic with aspects like transactionality in mind. That shouldn't happen: it pollutes the business code and makes writing tests a lot harder.
Is there any workaround for this issue? Any git project out there that sorts this out that I missed? Or, more drastically, any other persistence framework that supports this? From what I've read, Anorm supports it nicely, but I may be misunderstanding it and don't want to switch frameworks only to find out it doesn't (as happened with Slick).
There is no such thing as transactional annotations or the like in Slick. Your second "don't want" is actually the way to go. It's totally reasonable to return DBIO[User] from your DAOs, which does not defeat their purpose at all. It's the way Slick works.
class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData): Future[User] = {
    val action = (for {
      savedUser <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
      whatever <- DBIO.successful(nonDbStuff)
    } yield (savedUser, savedAddress)).transactionally

    db.run(action).map(result => result._1.copy(address = result._2))
  }
}

class SlickUserDao {
  def save(user: User): DBIO[User] = {
    (UserSchema.users returning UserSchema.users).insertOrUpdate(user)
  }
}
The signature of save in your service class is still the same.
No db related stuff in controllers.
You have full control of transactions.
I cannot find a case where the code above is harder to maintain / refactor compared to your original example.
There is also a quite exhaustive discussion that might be interesting for you. See Slick 3.0 withTransaction blocks are required to interact with libraries.

Configuring actor behavior using typesafe Config and HOCON

I am building a large agent-based / multi-agent model of a stock exchange using Akka/Play/Scala, etc and I am struggling a bit to understand how to configure my application. Below is a snippet of code that illustrates an example of the type of problem I face:
class Exchange extends Actor {
  val orderRoutingLogic = new OrderRoutingLogic()

  val router = {
    val marketsForSecurities = securities.foreach { security =>
      val marketForSecurity = context.actorOf(Props[DoubleAuctionMarket](
        new DoubleAuctionMarket(security) with BasicMatchingEngine), security.name
      )
      orderRoutingLogic.addMarket(security, marketForSecurity)
    }
    Router(orderRoutingLogic)
  }
}
In the snippet above I inject a BasicMatchingEngine into the DoubleAuctionMarket. However, I have written a number of different matching engines, and I would like to be able to configure the type of matching engine injected into DoubleAuctionMarket in the application configuration file.
Can this level of application configuration be done using typesafe Config and HOCON configuration files?
Interesting case. If I understood you right, you want to configure the Market actor, mixing in some MatchingEngine type specified in config?
Some clarification: you can't simply mix in a dynamic type. I mean, if you move the MatchingEngine type to config, it will be known only at runtime, when the config is parsed. And at that time you'll not be able to instantiate new DoubleAuctionMarket(security) with ???SomeClassInstance???. But maybe you could replace inheritance with aggregation. Maybe an instance of MatchingEngine can be passed to Market as a parameter?
Now, how to obtain an instance of MatchingEngine from config? In short: Typesafe Config has no parser for FQCN properties, but it's not hard to do it yourself using reflection. This technique is used in many places in Akka. Look here first: the provider property is set as an FQCN string and can be changed to another provider (i.e. RemoteActorRefProvider) in other configurations. Now look at how it's processed to obtain a Provider instance. First it's just read as a string here. Then ProviderClass is used to instantiate the actual (runtime) provider here. DynamicAccess is a utility helping with reflective calls. It's not publicly accessible via context.system, but just take a piece of it or instantiate it yourself; I don't think it's a big issue.
With some modifications, your code may look like this:
class Exchange extends Actor {
  val orderRoutingLogic = new OrderRoutingLogic()
  val matchingEngineClass = context.system.settings.config.getString("stocks.matching-engine")
  // ReflectiveDynamicAccess (akka.actor) is the stock DynamicAccess implementation
  val dynamicAccess = new ReflectiveDynamicAccess(getClass.getClassLoader)
  val matchingEngine = dynamicAccess.createInstanceFor[MatchingEngine](matchingEngineClass, Nil).get

  val router = {
    val marketsForSecurities = securities.foreach { security =>
      val marketForSecurity = context.actorOf(DoubleAuctionMarket.props(security, matchingEngine))
      orderRoutingLogic.addMarket(security, marketForSecurity)
    }
    Router(orderRoutingLogic)
  }
}
I've moved props to the companion object of DoubleAuctionMarket, as stated in the Recommended Practices section of the Akka docs. Usage of Props(new Actor()) is a dangerous practice.
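For completeness, the companion-object factory referred to above might look like this; it is a sketch that assumes DoubleAuctionMarket now takes the engine as a constructor parameter (aggregation instead of the mixin):

import akka.actor.{Actor, Props}

object DoubleAuctionMarket {
  // Props factory kept next to the actor class, per the Akka docs'
  // recommended practices; avoids closing over enclosing actor state.
  def props(security: Security, matchingEngine: MatchingEngine): Props =
    Props(new DoubleAuctionMarket(security, matchingEngine))
}

class DoubleAuctionMarket(security: Security, matchingEngine: MatchingEngine) extends Actor {
  def receive: Receive = {
    case order => () // delegate matching to the injected matchingEngine (elided)
  }
}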

ScalaTest: Issues with Singleton Object re-initialization

I am testing a parser I have written in Scala using ScalaTest. The parser handles one file at a time, and it has a singleton object like the following:
class Parser{...}
object Resolver {...}
The test case I have written is somewhat like this:
describe("Syntax:") {
val dir = new File("tests\\syntax");
val files = dir.listFiles.filter(
f => """.*\.chalice$""".r.findFirstIn(f.getName).isDefined);
for(inputFile <- files) {
val parser = new Parser();
val c = Resolver.getClass.getConstructor();
c.setAccessible(true);
c.newInstance();
val iserror = errortest(inputFile)
val result = invokeparser(parser,inputFile.getAbsolutePath) //local method
it(inputFile.getName + (if (iserror)" ERR" else " NOERR") ){
if (!iserror) result should be (ResolverSuccess())
else if(result.isInstanceOf[ResolverError]) assert(true)
}
}
}
Now at each iteration the side effects of previous iterations inside the singleton object Resolver are not cleaned up.
Is there any way to specify to scalatest module to re-initialize the singleton objects?
Update: Using Daniel's suggestion, I have updated the code and also added more details.
Update: Apparently it is the Parser that is doing something fishy. On subsequent calls it doesn't discard the previous AST. Strange. Since this is off topic, I will dig more and probably use a separate thread for the discussion. Thanks all for answering.
Final Update: The issue was with a singleton object other than Resolver; it was in another file, so I had somehow missed it. I was able to solve this using Daniel Spiewak's reply. It is a dirty way to do things, but it's also the only option, given my circumstances and the fact that I am writing test code, which is not going into production use.
According to the language spec, no, there is no way to recreate singleton objects. However, it is possible to reflectively invoke the constructor of a singleton, which overwrites the internal MODULE$ field which contains the actual singleton value:
object Test
Test.hashCode // => e.g. 779942019
val c = Test.getClass.getConstructor()
c.setAccessible(true)
c.newInstance()
Test.hashCode // => e.g. 1806030550
Now that I've shared the evil secret with you, let me caution you never, ever to do this. I would try very very hard to adjust the code, rather than playing sneaky tricks like this one. However, if things are as you say, and you really do have no other option, this is at least something.
ScalaTest has several ways to let you reinitialize things between tests. However, this particular question is tough to answer without knowing more. The main question would be: what does it take to reinitialize the singleton object? If the singleton object can't be reinitialized without instantiating a new singleton object, then you'd need to make sure each test loaded the singleton object anew, which would require using custom class loaders. I find it hard to believe someone would design something that way, though. Can you update your question with more details like that? I'll take another look later and see if the extra details make the answer more obvious.
ScalaTest has a runpath that loads classes anew for each run, but not a testpath. So you'll have to roll your own. The real problem here is that someone has designed this in a way that it is not easily tested. I would look at loading Resolver and Parser with a URLClassLoader inside each test. That way you'd get a new Resolver each test.
You'll need to take Parser & Resolver off of the classpath and off of the runpath. Put them into a directory of their own. Then create a URLClassLoader for each test that points to that directory, and call loadClass("Parser") on that class loader to get it. I'm assuming Parser refers to Resolver; in that case the JVM will go back to the class loader that loaded Parser to get Resolver, which is your URLClassLoader. Do a newInstance on the Parser to get the instance. That should solve your problem, because you'll get a new Resolver singleton object for each test.
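A minimal sketch of that recipe (the parser-classes directory name is hypothetical; Parser and Resolver must be compiled into it and removed from the test classpath, so parent-first delegation fails over to this loader):

import java.net.URLClassLoader
import java.nio.file.Paths

object FreshLoader {
  def freshParser(): Any = {
    val classesDir = Paths.get("parser-classes").toUri.toURL
    // The system loader no longer knows Parser, so this loader defines it
    // (and, transitively, a brand-new Resolver singleton).
    val loader = new URLClassLoader(Array(classesDir))
    loader.loadClass("Parser").getDeclaredConstructor().newInstance()
  }
}

// Calling FreshLoader.freshParser() in each test yields a fresh Resolver,
// because each call uses a new class loader.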
No answer, but I do have a simple example of where you might want to reset a singleton object in order to test its construction in multiple potential situations. Consider something stupid like the following code. You may want to write a test that validates that an exception is thrown when the environment isn't set up correctly, and also a test that validates that no exception occurs when the environment is set up correctly. I know, I know, everyone says, "Provide a default when the environment isn't set up correctly," but I DO NOT want to do this; it would cause issues because there would be no notification that you're using the wrong system.
object RequiredProperties extends Enumeration {
  type RequiredProperties = String

  private def getRequiredEnvProp(propName: String) = {
    sys.env.get(propName) match {
      case None => throw new RuntimeException(s"$propName is required but not found in the environment.")
      case Some(x) => x
    }
  }

  val ENVIRONMENT: String = getRequiredEnvProp("ENVIRONMENT")
}
Usage:
Init(RequiredProperties.ENVIRONMENT)
If I provided a default, then the user would never know that it wasn't set and that it defaulted to the dev environment. Or something along those lines.