I'm using scrooge + thrift to generate my server and client code. Everything is working just fine so far.
Here's a simplified example of how I use my client:
private lazy val client =
  Thrift.newIface[MyPingService[Future]](s"$host:$port")

def main(args: Array[String]): Unit = {
  logger.info("ping!")
  client.ping().foreach { _ =>
    logger.info("pong!")
    // TODO: close client
    sys.exit(0)
  }
}
Everything works, but when the program exits the server complains about unclosed connections. I've looked all over, but I can't seem to figure out how to close the client instance.
So my question is, how do you close a Finagle thrift client? I feel like I'm missing something obvious.
As far as I know, when you use the automagic Thrift.newIface[Iface] method to create your service, you can't close it, because the only thing that your code knows about the resulting value is that it conforms to Iface. If you need to close it, you can instantiate your client in two steps, creating the Thrift service in one and adapting it to your interface in the other.
Here's how it looks if you're using Scrooge to generate your Thrift interface:
val serviceFactory: ServiceFactory[ThriftClientRequest, Array[Byte]] =
  Thrift.newClient(s"$host:$port")

val client: MyPingService[Future] =
  new MyPingService.FinagledClient(serviceFactory.toService)

doStuff(client).ensure(serviceFactory.close())
I tried this in the repl, and it worked for me. Here's a lightly-edited transcript:
scala> val serviceFactory = Thrift.newClient(...)
serviceFactory: ServiceFactory[ThriftClientRequest,Array[Byte]] = <function1>
scala> val tweetService = new TweetService.FinagledClient(serviceFactory.toService)
tweetService: TweetService.FinagledClient = TweetService$FinagledClient#20ef6b76
scala> Await.result(tweetService.getTweets(GetTweetsRequest(Seq(20))))
res7: Seq[GetTweetResult] = ... "just setting up my twttr" ...
scala> serviceFactory.close
res8: Future[Unit] = ConstFuture(Return(()))
scala> Await.result(tweetService.getTweets(GetTweetsRequest(Seq(20))))
com.twitter.finagle.ServiceClosedException
This is not too bad, but I hope there's a better way that I don't know yet.
I haven't used Finagle, but according to the Finagle documentation:
val product = client().flatMap { service =>
  // `service` is checked out from the pool.
  service(QueryRequest("SELECT 5*5 AS `product`")) map {
    case rs: ResultSet => rs.rows.map(processRow)
    case _ => Seq.empty
  } ensure {
    // put `service` back into the pool.
    service.close()
  }
}
couldn't you adopt a similar strategy?
client.ping().foreach { service =>
  logger.info("pong!")
  service.close()
  sys.exit(0)
}
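For reference, the acquire/use/release shape behind that suggestion can be sketched with plain stdlib Futures (`Service` here is a hypothetical stand-in, not Finagle's API); `andThen` plays the role of Finagle's `ensure`, running the cleanup whether the call succeeds or fails:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical stand-in for a pooled service
final class Service {
  var closed = false
  def call(x: Int): Int = x * 2
  def close(): Unit = closed = true
}

val service = new Service

// `andThen` runs its callback on success *and* failure, like `ensure`
val result = Future(service.call(21)).andThen {
  case _ => service.close()
}

Await.result(result, 1.second)  // 42, and the service is closed afterwards
```

The returned Future completes only after the callback has run, so by the time `Await.result` returns, the resource is guaranteed to be released.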
I'm using ZIO for the first time, and I started with a boilerplate stub from https://github.com/guizmaii/scala-tapir-http4s-zio/blob/master/src/main/scala/example/HttpApp.scala that uses ZIO 1.0.0-RC17 to set up and run an http4s Blaze server, including Tapir. That worked out nicely, but when I later tried to update to version 1.0.3 to be on an up-to-date release, that version turned out not to be compatible with the code in the stub. Specifically:
This is the code that defines the server (some unrelated routing lines cut out of the original):
val prog: ZIO[ZEnv, Throwable, Unit] = for {
  conf <- ZIO.effect(ApplicationConf.build().orThrow())
  _    <- putStrLn(conf.toString)
  server = ZIO.runtime[AppEnvironment].flatMap { implicit rts =>
    val apiRoutes = new ApiRoutes[AppEnvironment]()
    val allTapirRoutes = apiRoutes.getRoutes.foldK
    val httpApp: HttpApp[RIO[AppEnvironment, *]] = allTapirRoutes.orNotFound
    val httpAppExtended = Logger.httpApp(logHeaders = true, logBody = true)(httpApp)
    BlazeServerBuilder[ZIO[AppEnvironment, Throwable, *]]
      .bindHttp(conf.port.port.value, conf.server.value)
      .withHttpApp(httpAppExtended)
      .withoutBanner
      .withSocketKeepAlive(true)
      .withTcpNoDelay(true)
      .serve
      .compile[RIO[AppEnvironment, *], RIO[AppEnvironment, *], ExitCode]
      .drain
  }
  prog <- server.provideSome[ZEnv] { currentEnv =>
    new Clock {
      override val clock: Clock.Service[Any] = currentEnv.clock
    }
  }
} yield prog

prog.foldM(h => putStrLn(h.toString).as(1), _ => ZIO.succeed(0))
This is the main body of the run() method. Running this code never results in the app exiting with code 0 because the Blaze server blocks termination, as expected. The problem is this snippet:
prog <- server.provideSome[ZEnv] { currentEnv =>
  new Clock {
    override val clock: Clock.Service[Any] = currentEnv.clock
  }
}
This doesn't work in 1.0.3 because of the introduction of Has[A]. The compiler now complains that you can't inherit from the final class Has, so you can no longer construct a Clock with new.
I tried to remedy this by replacing it with
prog = server.provideSomeLayer[ZEnv]
and replacing the exit code ints with ExitCode objects, which made the code compile, but after this the Blaze server did not seem to initialize or prevent termination of the app. It just finished with exit code 0.
Clearly there's something missing here, and I haven't seen any information on the shift from the older environment system to the new system based on Has[A]. How can I fix this boilerplate so that the Blaze server runs again?
If you are interested in a template tapir-zio-http4s project, I suggest using the one from the Tapir repo: https://github.com/softwaremill/tapir/blob/master/examples/src/main/scala/sttp/tapir/examples/ZioExampleHttp4sServer.scala
It is guaranteed to always compile against the latest Tapir (since it's a part of the project).
Also I personally used it recently. It worked.
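One detail that may also explain why the `provideSomeLayer` variant compiled but exited immediately: in a for-comprehension, `prog = server...` (with `=`) merely names the effect value, while `prog <- server...` (with `<-`) actually sequences it into the program, so with `=` the server effect is never run. A toy sketch of the difference (plain Scala with a minimal hand-rolled effect type, not ZIO):

```scala
// Minimal lazy effect type, just to illustrate for-comprehension desugaring.
final class IO[A](body: () => A) {
  def run(): A = body()
  def map[B](f: A => B): IO[B] = new IO(() => f(run()))
  def flatMap[B](f: A => IO[B]): IO[B] = new IO(() => f(run()).run())
}
object IO { def apply[A](a: => A): IO[A] = new IO(() => a) }

var serverStarted = false
val server: IO[Unit] = IO { serverStarted = true }

// `=` binds the effect *value*; the server effect itself never executes.
val progBound = for {
  _ <- IO(())
  s  = server
} yield s
progBound.run()
val afterBind = serverStarted   // still false

// `<-` sequences the effect, so it actually runs.
val progRun = for {
  _ <- IO(())
  _ <- server
} yield ()
progRun.run()
val afterRun = serverStarted    // now true
```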
I'm new to parallel programming and ZIO. I'm trying to get data from an API with parallel requests.
import sttp.client._
import zio.{Task, ZIO}

ZIO.foreach(files) { file =>
  getData(file)
  Task(file.getName)
}
def getData(file: File) = {
  val data: String = readData(file)
  val request = basicRequest.body(data).post(uri"$url")
    .headers(content -> "text", char -> "utf-8")
    .response(asString)
  implicit val backend: SttpBackend[Identity, Nothing, NothingT] = HttpURLConnectionBackend()
  val response = request.send()
  response.body match {
    case Success(value) =>
      val src = new PrintWriter(new File(filename))
      src.write(value.toString)
      src.close()
    case Failure(exception) => // log the error
  }
}
When I execute the program sequentially, it works as expected. If I try to run it in parallel by changing ZIO.foreach to ZIO.foreachPar, the program terminates prematurely. I get that I'm missing something basic here; any help figuring out the issue is appreciated.
Generally speaking, I wouldn't recommend mixing synchronous blocking code, as you have here, with the asynchronous non-blocking code that is ZIO's primary domain. There are some great talks out there on how to effectively use ZIO with the "world", so to speak.
There are two key points I would make: one, ZIO lets you manage resources effectively by attaching allocation and finalization steps, and two, "effects" (things which actually interact with the world) should be wrapped in the tightest scope possible.*
So let's go through this example a bit. First of all, I would not suggest using the default Identity backend with ZIO; I would recommend the AsyncHttpClientZioBackend instead.
import java.io.{File, IOException, PrintWriter}

import sttp.client._
import sttp.client.asynchttpclient.zio.AsyncHttpClientZioBackend
import zio.{Managed, Task, UIO, ZIO, ZManaged}
import zio.blocking.effectBlocking

// Extract the common elements of the request
val baseRequest = basicRequest.post(uri"$url")
  .headers(content -> "text", char -> "utf-8")
  .response(asString)

// Produces a writer wrapped in a `Managed`, allowing it to be properly
// closed after being used
def managedWriter(filename: String): Managed[IOException, PrintWriter] =
  ZManaged.fromAutoCloseable(UIO(new PrintWriter(new File(filename))))

// `AsyncHttpClientZioBackend()` returns an effect which produces an
// `SttpBackend`, so we flatMap over it to extract the backend.
val program = AsyncHttpClientZioBackend().flatMap { implicit backend =>
  ZIO.foreachPar(files) { file =>
    for {
      // Wrap the synchronous read in an effect that runs on the "blocking"
      // thread pool instead of blocking the main one.
      data <- effectBlocking(readData(file))
      // `send` returns a `Task` because it uses the implicit backend in scope.
      resp <- baseRequest.body(data).send()
      // Build the managed writer, then `use` it; at the end of `use` the
      // writer is closed automatically.
      _ <- managedWriter("").use(w => Task(w.write(resp.body.toString)))
    } yield ()
  }
}
At this point you will just have the program, which you will need to run using one of the unsafe methods, or through the main method if you are using a zio.App.
* Not always possible or convenient, but it is useful because it prevents resource hogging by yielding tasks back to the runtime for scheduling.
When you use a purely functional IO library like ZIO, you must not call any side-effecting functions (like getData) directly; they belong inside factory methods like Task.effect or Task.apply.
ZIO.foreach(files) { file =>
  Task {
    getData(file)
    file.getName
  }
}
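To see concretely why the wrapping matters: calling getData eagerly performs the side effect at construction time, while a wrapped effect is suspended until the runtime executes it. A plain-Scala sketch (a toy stand-in for suspension, not ZIO's actual Task):

```scala
var calls = 0
def getData(): String = { calls += 1; "data" }

// Toy suspended computation: nothing happens at construction time,
// only when `run()` is called.
final class Suspended[A](body: => A) { def run(): A = body }

val eager = getData()                     // the side effect happens right here
val suspended = new Suspended(getData())  // nothing runs yet
val callsBeforeRun = calls                // 1: only the eager call happened

suspended.run()                           // now the wrapped effect runs
val callsAfterRun = calls                 // 2
```

This is why ZIO can safely run suspended effects in parallel and retry or interrupt them: it controls *when* they execute.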
I've been using doobie (cats) to connect to a PostgreSQL database from a Scalatra application. Recently I noticed that the app was creating a new connection pool for every transaction. I eventually worked around it (see below), but my approach is quite different from the one taken in the 'Managing Connections' section of the book of doobie, so I was hoping someone could confirm whether it is sensible, or whether there is a better way of setting up the connection pool.
Here's what I had initially - this works but creates a new connection pool on every connection:
import com.zaxxer.hikari.HikariDataSource
import doobie.hikari.hikaritransactor.HikariTransactor
import doobie.imports._

val pgTransactor = HikariTransactor[IOLite](
  "org.postgresql.Driver",
  s"jdbc:postgresql://${postgresDBHost}:${postgresDBPort}/${postgresDBName}",
  postgresDBUser,
  postgresDBPassword
)

// every query goes via this function
def doTransaction[A](update: ConnectionIO[A]): Option[A] = {
  val io = for {
    xa  <- pgTransactor
    res <- update.transact(xa) ensuring xa.shutdown
  } yield res
  io.unsafePerformIO
}
My initial assumption was that the problem was having ensuring xa.shutdown on every request, but removing it results in connections quickly being used up until there are none left.
This was an attempt to fix the problem - enabled me to remove ensuring xa.shutdown, but still resulted in the connection pool being repeatedly opened and closed:
val pgTransactor: HikariTransactor[IOLite] = HikariTransactor[IOLite](
  "org.postgresql.Driver",
  s"jdbc:postgresql://${postgresDBHost}:${postgresDBPort}/${postgresDBName}",
  postgresDBUser,
  postgresDBPassword
).unsafePerformIO

def doTransaction[A](update: ConnectionIO[A]): Option[A] = {
  val io = update.transact(pgTransactor)
  io.unsafePerformIO
}
Finally, I got the desired behaviour by creating a HikariDataSource object and then passing it into the HikariTransactor constructor:
val dataSource = new HikariDataSource()
dataSource.setJdbcUrl(s"jdbc:postgresql://${postgresDBHost}:${postgresDBPort}/${postgresDBName}")
dataSource.setUsername(postgresDBUser)
dataSource.setPassword(postgresDBPassword)

val pgTransactor: HikariTransactor[IOLite] = HikariTransactor[IOLite](dataSource)

def doTransaction[A](update: ConnectionIO[A], operationDescription: String): Option[A] = {
  val io = update.transact(pgTransactor)
  io.unsafePerformIO
}
You can do something like this:
val xa = HikariTransactor[IOLite](dataSource).unsafePerformIO
and pass it to your repositories.
.transact applies the transaction boundaries, like Slick's .transactionally.
E.g.:
def interactWithDb = {
  val q: ConnectionIO[Int] = sql"""..."""
  q.transact(xa).unsafePerformIO
}
Yes, the response from Radu gets at the problem. The HikariTransactor (the underlying HikariDataSource really) has internal state so constructing it is a side-effect; and you want to do it once when your program starts and pass it around as needed. So your solution works, just note the side-effect.
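The "construct once, pass it around" point can be sketched in plain Scala; `Pool` here is a hypothetical stand-in for the stateful transactor/datasource:

```scala
var constructions = 0

// Hypothetical stand-in for a stateful resource like HikariDataSource:
// constructing it is a side effect (it opens connections, spawns threads, etc.)
final class Pool { constructions += 1 }

// Constructed at most once, on first use, then shared by every caller
lazy val pool = new Pool

def doTransaction(p: Pool): Int = 42  // every query goes through the same pool

doTransaction(pool)
doTransaction(pool)
val totalConstructions = constructions  // 1, no matter how many transactions run
```

Compare this with the first snippet in the question, where the pool-building effect was re-run inside every call to `doTransaction`.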
Also, as noted, I don't monitor SO … try the Gitter channel or open an issue if you have questions. :-)
Java code:
public static Connection connection = null;

public static void main(String[] args) {
    if (connection == null) {
        connection = ConnectionFactory.createConnection(conf);
    }
    // using connection object to do something
}
Converting to Scala code: someone told me to use Option[T] to handle null values, but I don't know how to use Option[T] well, and I find it very troublesome.
Scala code:
var connOpt: Option[Connection] = None

def main(args: Array[String]) {
  // check the `connOpt` Option
  connOpt match {
    case Some(connection) =>
      // using `connection` to do something ----- code1
    case _ =>
      val connection = ConnectionFactory.createConnection()
      connOpt = Option(connection)
      // using `connection` to do something ----- code2
  }
}
As you can see in the Scala code, I have to check connOpt every time I want to use it. code1 and code2 are the same code, so I have to write it twice. I know I can wrap code1/code2 in a function, but that's still very troublesome.
How do I handle this case?
I think you can use the 'fold' method to do what you want, like this:
var connOpt: Option[Connection] = None

def main(args: Array[String]) {
  val connection = connOpt.fold(ConnectionFactory.createConnection())(conn => conn)
  // using the connection object to do something
}
The logic of the code above is the same as your Java code, without writing 'code1' and 'code2' twice.
To explain the usage of 'fold', consider this sketch:

obj.fold {
  // `obj` is None: return the default value you want
} { a =>
  // if `obj` is Some(value), `a` is just the `value`
}
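Here is a runnable version of the same pattern, with String standing in for Connection and a hypothetical factory; note that `opt.fold(default)(identity)` is equivalent to the more idiomatic `opt.getOrElse(default)`:

```scala
// Hypothetical factory, with String standing in for Connection
def createConnection(): String = "fresh-connection"

val none: Option[String] = None
val some: Option[String] = Some("cached-connection")

val a = none.fold(createConnection())(conn => conn)  // "fresh-connection"
val b = some.fold(createConnection())(conn => conn)  // "cached-connection"

// Equivalent, and arguably clearer:
val c = none.getOrElse(createConnection())           // "fresh-connection"
```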
Good luck!
It depends on your business logic; not so much on the programming language.
Does it make sense to keep your component alive if the connection attempt is unsuccessful?
You might want to "fail early": if you can't connect, either throw an Exception or stop your component. In this case, using Option is an unnecessary abstraction.
If your component should survive if it fails to connect, you could do this:
val maybeConn: Option[Connection] = Option(ConnectionFactory.createConnection(conf))

maybeConn foreach doConnRelatedOperation
doOtherOperationThatDoesNotRequireConn

def doConnRelatedOperation(conn: Connection) = println(conn)

def doOtherOperationThatDoesNotRequireConn = "hello!"
In this code example:
Option(ConnectionFactory.createConnection(conf)) will return either Some(conn) or None (I'm assuming createConnection doesn't throw; if it could then maybe you want to use the Try functor instead)
maybeConn foreach doConnRelatedOperation will either do nothing (if maybeConn is None) or invoke the side-effecting function doConnRelatedOperation, which takes a Connection, with the wrapped value (if maybeConn is Some(conn)).
Note that, in this example, doOtherOperationThatDoesNotRequireConn will execute even if there is no connection.
Finally
Let's say your createConnection function might throw an Exception. In that case, Option is the wrong abstraction; you may want to do lazy val maybeConn: Try[Connection] = Try(ConnectionFactory.createConnection(conf)) instead (basically Try instead of Option).
Note the beauty here: the rest of the code doesn't need to be modified! The doConnRelatedOperation function will only be executed if the connection was created, and doOtherOperationThatDoesNotRequireConn will be executed afterwards irrespective of whether you have the connection. The exception will not affect your code flow!
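A runnable sketch of the Try variant, with String standing in for Connection and a hypothetical factory that always throws:

```scala
import scala.util.Try

// Hypothetical factory that throws; String stands in for Connection
def createConnection(): String = throw new RuntimeException("no network")

lazy val maybeConn: Try[String] = Try(createConnection())

// `foreach` only runs on Success, so the failure is safely skipped
maybeConn.foreach(conn => println(conn))

// ...and the rest of the flow continues regardless: no exception escaped
val other = "hello!"
```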
Welcome to Scala's Good Parts :)
Using Option instead of null is only one of the things "they say" about Scala. Your problem is that you chose to follow that one (fairly random) piece of advice and ignore all the others.
The other piece of advice, more relevant in this case, is: avoid mutable state and vars. You can safely throw out both "code 1" and "code 2", and just do this:
lazy val conn = ConnectionFactory.createConnection(conf)
val connOpt: Option[Connection] = None

def main(args: Array[String]) {
  // check the `connOpt` Option, falling back to the lazily-created connection
  connOpt.orElse(Some(conn)).foreach { connection =>
    // code 1
  }
}
After migrating my Play (Scala) app to 2.5.3, some tests of my code using ReactiveMongo that once passed now fail in the setup.
Here is my code using ScalaTest:
def fixture(testMethod: (...) => Any) {
  implicit val injector = new ScaldiApplicationBuilder()
    .prependModule(new ReactiveMongoModule)
    .prependModule(new TestModule)
    .buildInj()

  def reactiveMongoApi = inject[ReactiveMongoApi]
  def collection: BSONCollection =
    reactiveMongoApi.db.collection[BSONCollection](testCollection)
  lazy val id = BSONObjectID.generate

  // Error occurs at the next line
  Await.result(collection.insert(Person(id = id, slug = "test-slug",
    firstName = "Mickey", lastName = "Mouse")), 10.seconds)
  ...
}
At the insert line, I get this:
reactivemongo.core.errors.ConnectionNotInitialized: MongoError['Connection is missing metadata (like protocol version, etc.) The connection pool is probably being initialized.']
I have tried a bunch of things like initializing collection with a lazy val instead of def. But nothing has worked.
Any insight into how to get my tests passing again is appreciated.
With thanks to @cchantep, the test runs as expected by replacing this code above:
def collection: BSONCollection = reactiveMongoApi.db.collection[BSONCollection](testCollection)
with this code
def collection: BSONCollection = Await.result(reactiveMongoApi.database.map(_.collection[BSONCollection](testCollection)), 10.seconds)
In other words, reactiveMongoApi.database (along with the appropriate changes because of the Future) is the way to go.
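The shape of that fix can be sketched with plain stdlib Futures (hypothetical stand-ins; the real types come from ReactiveMongo):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stand-in: `database` resolves asynchronously, like reactiveMongoApi.database
def database: Future[String] = Future.successful("db")

// Resolve the Future before using the collection, as in the fixed test
def collection: String =
  Await.result(database.map(db => db + ".collection"), 10.seconds)
```

The key difference from the deprecated `db` accessor is that the Future only completes once the connection pool is initialized, which is exactly what the ConnectionNotInitialized error was complaining about.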