Serving an entire directory tree with spray.io - scala

I would like to translate this JS code to Scala, using spray.io.
How can I translate the line below to Scala using spray.io?
app.use('/', express.static(path.join(__dirname, 'public')));
In other words, how can I serve an entire directory tree using spray.io?

As the comment above says, Spray is deprecated, but the directives are similar in akka-http. Here is what you probably need (getFromResourceDirectory in your case):
pathPrefix("docs") {
get {
path("swagger.json") {
getFromResource("swagger.json", ContentTypes.`application/json`)
} ~
(pathEnd | pathSingleSlash) {
redirect("docs/index.html", StatusCodes.TemporaryRedirect)
} ~
getFromResourceDirectory("swagger-ui")
}
}
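For the original express.static line, which serves a filesystem directory at the root path, a minimal akka-http equivalent could look like the sketch below. This is untested and assumes akka-http 10.2+ with a ./public directory next to the process (the object name, host, and port are illustrative):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object StaticFileServer extends App {
  implicit val system: ActorSystem = ActorSystem()

  // Serve every file under ./public (recursively), like express.static
  val route = getFromDirectory("public")

  Http().newServerAt("localhost", 8080).bind(route)
}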

This serves files (recursively) from the directory ./web/:
package com.softwaremill.spray.server

import akka.actor.ActorSystem
import spray.routing.SimpleRoutingApp

object Step1Complete extends App with SimpleRoutingApp {
  implicit val actorSystem = ActorSystem()

  startServer(interface = "localhost", port = 3300) {
    get {
      path("hello") {
        complete {
          "Welcome to Amber Gold!"
        }
      }
    } ~
    pathPrefix("web") {
      getFromDirectory("./web/")
    }
  }
}
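With this route in place, a request like the following returns ./web/index.html, assuming that file exists:

curl http://localhost:3300/web/index.html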


Loader constraint violation when using Dropwizard Histogram metric in Apache Flink

I am running a streaming job on a Flink 1.9.1 cluster and trying to get a histogram of values into our Prometheus metric collector. Per the recommendation in the Flink docs, I used the Dropwizard histogram implementation with the Flink-provided wrapper; however, when submitting the job to the cluster, it crashes with the following traceback:
java.lang.LinkageError: loader constraint violation: when resolving method "org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper.<init>(Lcom/codahale/metrics/Histogram;)V" the class loader (instance of org/apache/flink/util/ChildFirstClassLoader) of the current class, com/example/foo/metrics/FooMeter, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for the method's defining class, org/apache/flink/dropwizard/metrics/DropwizardHistogramWrapper, have different Class objects for the type com/codahale/metrics/Histogram used in the signature
at com.example.foo.metrics.FooMeter.<init>(FooMeter.scala:11)
at com.example.foo.transform.ValidFoos$.open(ValidFoos.scala:15)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamFlatMap.open(StreamFlatMap.java:43)
at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:532)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:396)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
I have found a similar error in the mailing list; however, using the shadowJar plugin in Gradle didn't help.
Is there something I am missing?
The relevant code is here:
import com.codahale.metrics.{Histogram, SlidingWindowReservoir}
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper
import org.apache.flink.metrics.{MetricGroup, Histogram => FlinkHistogram}
import org.apache.flink.util.Collector

import scala.util.Try

class FooMeter(metricGroup: MetricGroup, name: String) {
  private var histogram: FlinkHistogram = metricGroup.histogram(
    name, new DropwizardHistogramWrapper(new Histogram(new SlidingWindowReservoir(500))))

  def record(fooValue: Long): Unit = {
    histogram.update(fooValue)
  }
}

object ValidFoos extends RichFlatMapFunction[Try[FooData], Foo] {
  @transient private var fooMeter: FooMeter = _

  override def open(parameters: Configuration): Unit = {
    fooMeter = new FooMeter(getRuntimeContext.getMetricGroup, "foo_values")
  }

  override def flatMap(value: Try[FooData], out: Collector[Foo]): Unit = {
    Transform.validFoo(value) foreach (foo => {
      fooMeter.record(foo.value)
      out.collect(foo)
    })
  }
}
build.gradle:
plugins {
  id 'scala'
  id 'application'
  id 'com.github.johnrengelman.shadow' version '2.0.4'
}

ext {
  flinkVersion = "1.9.1"
  scalaBinaryVersion = "2.11"
  scalaVersion = "2.11.12"
}

dependencies {
  implementation(
    "org.apache.flink:flink-streaming-scala_${scalaBinaryVersion}:${flinkVersion}",
    "org.apache.flink:flink-connector-kafka_${scalaBinaryVersion}:${flinkVersion}",
    "org.apache.flink:flink-runtime-web_${scalaBinaryVersion}:${flinkVersion}",
    "org.apache.flink:flink-json:${flinkVersion}",
    "org.apache.flink:flink-metrics-dropwizard:${flinkVersion}",
    "org.scala-lang:scala-library:${scalaVersion}"
  )
}

shadowJar {
  relocate("org.apache.flink.dropwizard", "com.example.foo.shaded.dropwizard")
  relocate("com.codahale", "com.example.foo.shaded.codahale")
}

jar {
  zip64 = true
  archiveName = rootProject.name + '-all.jar'
  manifest {
    attributes('Main-Class': 'com.example.foo.Foo')
  }
  from {
    configurations.compileClasspath.collect {
      it.isDirectory() ? it : zipTree(it)
    }
    configurations.runtimeClasspath.collect {
      it.isDirectory() ? it : zipTree(it)
    }
  }
}
Further info:
Running the code locally works.
The Flink cluster is custom-compiled with the following directory structure:
# find /usr/lib/flink/
/usr/lib/flink/
/usr/lib/flink/plugins
/usr/lib/flink/plugins/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/plugins/flink-s3-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/plugins/flink-cep_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-python_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-queryable-state-runtime_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-sql-client_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/plugins/flink-state-processor-api_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-oss-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/plugins/flink-swift-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-azure-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/plugins/flink-shaded-netty-tcnative-dynamic-2.0.25.Final-7.0.jar
/usr/lib/flink/plugins/flink-s3-fs-presto-1.9.1.jar
/usr/lib/flink/plugins/flink-cep-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly_2.11-1.9.1.jar
/usr/lib/flink/lib
/usr/lib/flink/lib/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/lib/flink-table_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/lib/log4j-1.2.17.jar
/usr/lib/flink/lib/slf4j-log4j12-1.7.15.jar
/usr/lib/flink/lib/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/lib/flink-table-blink_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-dist_2.11-1.9.1.jar
/usr/lib/flink/bin/...
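For what it's worth, the LinkageError itself says that DropwizardHistogramWrapper was loaded by the app class loader (i.e. from the cluster's classpath) while com.codahale.metrics.Histogram came from the user jar via the child-first class loader. One idea consistent with that message, sketched below as an assumption rather than a confirmed fix, is to stop bundling the wrapper (and its transitive Dropwizard classes) into the fat jar so both sides resolve from the cluster's classpath:

dependencies {
  // Sketch (assumption): let the cluster provide the Dropwizard wrapper and
  // its transitive com.codahale.metrics classes instead of bundling them,
  // so both types are loaded by the same class loader.
  compileOnly "org.apache.flink:flink-metrics-dropwizard:${flinkVersion}"
}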

Play Scala OAuth Twitter example and redirect

The Play example for using OAuth and Twitter is shown below.
In the Play Framework I am still learning how to use redirects and routes. How would you set up the routes file and the Application.scala file to handle this redirect?
Redirect(routes.Application.index).withSession("token" -> t.token, "secret" -> t.secret)
Would the routes be something like this?
GET /index controllers.Application.index(String, String)
Link to the Play Framework documentation with the example code: http://www.playframework.com/documentation/2.0/ScalaOAuth
object Twitter extends Controller {

  val KEY = ConsumerKey("xxxxx", "xxxxx")

  val TWITTER = OAuth(ServiceInfo(
    "https://api.twitter.com/oauth/request_token",
    "https://api.twitter.com/oauth/access_token",
    "https://api.twitter.com/oauth/authorize", KEY),
    false)

  def authenticate = Action { request =>
    request.queryString.get("oauth_verifier").flatMap(_.headOption).map { verifier =>
      val tokenPair = sessionTokenPair(request).get
      // We got the verifier; now get the access token, store it and back to index
      TWITTER.retrieveAccessToken(tokenPair, verifier) match {
        case Right(t) => {
          // We received the authorized tokens in the OAuth object - store it before we proceed
          Redirect(routes.Application.index).withSession("token" -> t.token, "secret" -> t.secret)
        }
        case Left(e) => throw e
      }
    }.getOrElse(
      TWITTER.retrieveRequestToken("http://localhost:9000/auth") match {
        case Right(t) => {
          // We received the unauthorized tokens in the OAuth object - store it before we proceed
          Redirect(TWITTER.redirectUrl(t.token)).withSession("token" -> t.token, "secret" -> t.secret)
        }
        case Left(e) => throw e
      })
  }

  def sessionTokenPair(implicit request: RequestHeader): Option[RequestToken] = {
    for {
      token <- request.session.get("token")
      secret <- request.session.get("secret")
    } yield {
      RequestToken(token, secret)
    }
  }
}
It turned out that the reason I had so many intermittent problems with routes and redirects was a combination of the versions of Play, Scala, and ScalaIDE for Eclipse. Using Play 2.2.3, Scala 2.10.4, and ScalaIDE 2.10.x solved the routes and redirect problems.
The following import statements are needed for the Twitter example.
import play.api.libs.oauth.ConsumerKey
import play.api.libs.oauth.ServiceInfo
import play.api.libs.oauth.OAuth
import play.api.libs.oauth.RequestToken
If your route is like this:
GET /index controllers.Application.index(param1:String, param2:String)
Then the reverse route would look like this:
routes.Application.index("p1", "p2")
Which would result in something like this:
/index?param1=p1&param2=p2
Make sure that the documentation you are looking at is of the correct version, for 2.2.x you would need this url: http://www.playframework.com/documentation/2.2.x/ScalaOAuth
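For the Twitter example above, the routes file would also need entries for the index action and the /auth callback passed to retrieveRequestToken. A sketch, assuming index takes no parameters (the controller and action names come from the snippet, not the original answer):

GET     /        controllers.Application.index
GET     /auth    controllers.Twitter.authenticate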

When does Play load application.conf?

Is application.conf already loaded when the code in Global.scala is executed? I'm asking because I've tried to read some configuration items from Global.scala and I always get None. Is there any workaround?
In Java it's already available in beforeStart(Application app):
public class Global extends GlobalSettings {
    public void beforeStart(Application app) {
        String secret = Play.application().configuration().getString("application.secret");
        play.Logger.debug("Before start secret is: " + secret);
        super.beforeStart(app);
    }
}
As it's required e.g. for configuring the DB connection, Scala most probably works the same way (I can't check).
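A minimal Scala sketch of the same idea (untested; it assumes the classic GlobalSettings API and reads the configuration from the Application passed into the hook):

import play.api.{Application, GlobalSettings, Logger}

object Global extends GlobalSettings {
  override def beforeStart(app: Application): Unit = {
    // The configuration is already attached to the Application at this point
    val secret = app.configuration.getString("application.secret")
    Logger.debug(s"Before start secret is: $secret")
    super.beforeStart(app)
  }
}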
Below is how to read the configuration just after it has been loaded, but before the application actually starts:
import play.api.{Configuration, Mode}
import play.api.GlobalSettings
import java.io.File
import utils.apidocs.InfoHelper

object Global extends GlobalSettings {

  override def onLoadConfig(
      config: Configuration,
      path: File,
      classloader: ClassLoader,
      mode: Mode.Mode): Configuration = {
    InfoHelper.loadApiInfo(config)
    config
  }
}
And here below, just for your info, is the source of InfoHelper.loadApiInfo – it just loads API info for Swagger UI:
package utils.apidocs

import play.api.Configuration
import com.wordnik.swagger.config._
import com.wordnik.swagger.model._

object InfoHelper {

  def loadApiInfo(config: Configuration) = {
    config.getString("application.name").map { appName =>
      config.getString("application.domain").map { appDomain =>
        config.getString("application.emails.apiteam").map { contact =>
          val apiInfo = ApiInfo(
            title = s"$appName API",
            description = s"""
              Fantastic application that makes you smile. You can find out
              more about $appName at $appDomain.
            """,
            termsOfServiceUrl = s"//$appDomain/terms",
            contact = contact,
            license = s"$appName Subscription and Services Agreement",
            licenseUrl = s"//$appDomain/license"
          )
          ConfigFactory.config.info = Some(apiInfo)
        }}}
  }
}
I hope it helps.

How to test remote (production) Scalatra web service with Specs2?

I'm using Specs2 to test my Scalatra web service.
class APISpec extends ScalatraSpec {

  def is = "Simple test" ^
    "invalid key should return status 401" ! root401

  addServlet(new APIServlet(), "/*")

  def root401 = get("/payments") {
    status must_== 401
  }
}
This tests the web service locally (on localhost). Now I would like to run the same tests against the production Jetty server. Ideally, I would be able to do this by only changing some URL. Is this possible at all? Or do I have to write my own (possibly duplicated) testing code for the production server?
I don't know how Scalatra manages its URLs, but one thing you can do in specs2 is control parameters from the command line:
class APISpec extends ScalatraSpec with CommandLineArguments { def is = s2"""
  Simple test
    invalid key should return status 401 $root401
  ${addServlet(new APIServlet(), s"$baseUrl/*")}
  """

  def baseUrl = {
    // assuming that you passed 'url www.production.com' on the command line
    val args = arguments.commandLine.split(" ")
    args.zip(args.drop(1))
      .collectFirst { case ("url", value) => value }
      .getOrElse("localhost:8080")
  }

  def root401 = get(s"$baseUrl/payments") {
    status must_== 401
  }
}
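With sbt, the argument can then be passed after the specs2 separator; something like this should work (the exact runner syntax depends on your sbt and specs2 versions):

sbt "testOnly *APISpec -- url www.production.com"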

Scalate ResourceNotFoundException in Scalatra

I'm trying the following based on scalatra-sbt.g8:
class FooWeb extends ScalatraServlet with ScalateSupport {

  beforeAll { contentType = "text/html" }

  get("/") {
    templateEngine.layout("/WEB-INF/scalate/templates/hello-scalate.jade")
  }
}
but I'm getting the following exception (even though the file exists) - any clues?
Could not load resource: [/WEB-INF/scalate/templates/hello-scalate.jade]; are you sure it's within [null]?
org.fusesource.scalate.util.ResourceNotFoundException: Could not load resource: [/WEB-INF/scalate/templates/hello-scalate.jade]; are you sure it's within [null]?
FWIW, the innermost exception is coming from org.mortbay.jetty.handler.ContextHandler.getResource line 1142: _baseResource==null.
Got an answer from the scalatra mailing list. The problem was that I was starting the Jetty server with:
import org.mortbay.jetty.Server
import org.mortbay.jetty.servlet.{Context, ServletHolder}

val server = new Server(8080)
val root = new Context(server, "/", Context.SESSIONS)
root.addServlet(new ServletHolder(new FooWeb()), "/*")
server.start()
I needed to insert this before start():
root.setResourceBase("src/main/webapp")
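Putting it together, the working startup code reads (this is just the snippet above with the fix applied):

import org.mortbay.jetty.Server
import org.mortbay.jetty.servlet.{Context, ServletHolder}

val server = new Server(8080)
val root = new Context(server, "/", Context.SESSIONS)
root.addServlet(new ServletHolder(new FooWeb()), "/*")
// Give the context a resource base so /WEB-INF/scalate/... can be resolved
// (without it, _baseResource is null and Scalate throws ResourceNotFoundException)
root.setResourceBase("src/main/webapp")
server.start()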