How to define routes for a static file in Scala Play Framework - scala

I am really stuck here; I need to know how to define a route for a static file. The file sits in the root folder of the project, not in public.
I tried this:
GET /dump.txt controllers.Assets.at(path=".", "dump.txt")
which gives me: Compilation error, Identifier expected
The code that generates the file:
val pw = new PrintWriter(new File("dump.txt"))
val result = Json.parse(qdb.sourceSystem(request.session("role")))
val source_system = (result \ "source_system").get.as[List[String]]
for (x ← source_system) {
  try {
    // val flagedJsonData = Json.parse(qdb.getSourceLineagesFromDBFlag(x, request.session("role")))
    // val flagData = (flagedJsonData \ "flaglist").get.as[String]
    val flagData = qdb.getSourceLineagesFromDBFlag(x, request.session("role"))
    if (flagData.isEmpty == false) {
      val myarray = flagData.split(",")
      for (variable ← myarray) {
        var dump = variable.split("::") map { y ⇒ "\"%s\"".format(y) } mkString ","
        dump = "\"%s\",".format(x) + dump
        pw.write(dump + "\n")
      }
    }
  } catch {
    case _: Throwable ⇒ println("Data Not Found for " + x)
  }
}
pw.close()

The Play docs address this well. I am assuming that you are using Play 2.x.
You will need to add an extra route in your conf/routes file that maps to the special Assets controller. The at method lets you specify which physical file you want it to route to. If the file is in your root folder, you shouldn't need any path spec.
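For reference, the standard Assets mapping described in the Play docs looks roughly like this; the /assets prefix and the /public folder are just the usual defaults generated by the Play seed, so adjust them to your layout:
GET /assets/*file controllers.Assets.at(path="/public", file)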

The Assets controller is for serving "assets", i.e. public static files.
You are trying to serve a non-public file that looks dynamic (the name "dump" suggests you plan to produce dumps from time to time, so it does not seem to be static).
So, if you are sure that your file is an "asset", i.e. public and static, then you need to put it into the project's "public" folder (or a subfolder) and use controllers.Assets as you describe, or as it is documented.
If you want to serve a non-public, dynamic file, you need to create your own controller. Here is a small example you can take and adapt:
app/controllers/Static.scala
package controllers

import play.api._
import play.api.mvc._
import scala.io.Source
import play.api.Play.current

class Static extends Controller {
  def file(path: String) = Action {
    val file = Play.application.getFile(path)
    if (file.exists())
      Ok(Source.fromFile(file.getCanonicalPath()).mkString)
    else
      NotFound
  }
}
In the routes file:
GET /build.sbt controllers.Static.file(path="build.sbt")
The result in the browser at http://localhost:9000/build.sbt:
name := """scala-assests-multy"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.6"
libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  specs2 % Test
)
resolvers += "scalaz-bintray" at "http://dl.bintray.com/scalaz/releases"
// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator
You can take this controller and use it to serve your dump file with:
GET /dump.txt controllers.Static.file(path="dump.txt")
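If the dump can grow large, reading the whole file into a String may be wasteful. A small variation (just a sketch, using the same Play 2.x APIs as above; the action name fileStreamed is made up here) lets Play stream the file instead:
def fileStreamed(path: String) = Action {
  // hypothetical alternative action: stream the file rather than buffering it in memory
  val file = Play.application.getFile(path)
  if (file.exists())
    Ok.sendFile(file, inline = true)
  else
    NotFound
}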

This is how I bootstrap my SPA.
class Bootstrap extends Controller {
  def index = Assets.at("/public", "app/index.html")
}
For SPA static assets, I just make sure they're visible from my routes.
GET / controllers.Bootstrap.index
GET /views/*file controllers.Assets.versioned(path="/public/app/js/views", file: Asset)
GET /*file controllers.Assets.versioned(path="/public/app", file: Asset)

Related

How to move files from one S3 bucket directory to another directory in same bucket? Scala/Java

I want to move all files under a directory in my S3 bucket to another directory within the same bucket, using Scala.
Here is what I have:
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession

def copyFromInputFilesToArchive(spark: SparkSession): Unit = {
  val sourcePath = new Path("s3a://path-to-source-directory/")
  val destPath = new Path("s3a://path-to-destination-directory/")
  val fs = sourcePath.getFileSystem(spark.sparkContext.hadoopConfiguration)
  fs.moveFromLocalFile(sourcePath, destPath)
}
I get this error:
fs.copyFromLocalFile returns Wrong FS: s3a:// expected file:///
Error explained
The error you are seeing is because copyFromLocalFile (and moveFromLocalFile) is really for moving files from a local filesystem into S3. You are trying to "move" files that are both already in S3.
It is important to note that directories don't really exist in Amazon S3 buckets - The folder/file hierarchy you see is really just key-value metadata attached to the file. All file objects are really sitting in the same big, single level container and that filename key is there to give the illusion of files/folders.
To "move" files in a bucket, what you really need to do is update the filename key with the new path which is really just editing object metadata.
How to do a "move" within a bucket with Scala
To accomplish this, you'd need to copy the original object, assign the new metadata to the copy, and then write it back to S3. In practice, you can copy it and save it to the same object which will overwrite the old version, which acts a lot like an update.
Try something like this (from datahackr):
/**
 * Copy an object from one key to another in multiple parts.
 *
 * @param sourceS3Path source S3 object key
 * @param targetS3Path target S3 object key
 * @param fromS3BucketName source bucket name
 * @param toS3BucketName destination bucket name
 */
@throws(classOf[Exception])
@throws(classOf[AmazonServiceException])
def copyMultipart(sourceS3Path: String, targetS3Path: String, fromS3BucketName: String, toS3BucketName: String) {
  // Create a list of ETag objects. You retrieve ETags for each object part copied,
  // then, after each individual part has been copied, pass the list of ETags to
  // the request to complete the upload.
  var partETags = new ArrayList[PartETag]()

  // Initiate the multipart upload.
  val initRequest = new InitiateMultipartUploadRequest(toS3BucketName, targetS3Path)
  val initResponse = s3client.initiateMultipartUpload(initRequest)

  // Get the object size to track the end of the copy operation.
  val metadataResult = getS3ObjectMetadata(sourceS3Path, fromS3BucketName)
  val objectSize = metadataResult.getContentLength()

  // Copy the object using 50 MB parts.
  val partSize = (50 * 1024 * 1024) * 1L
  var bytePosition = 0L
  var partNum = 1
  var copyResponses = new ArrayList[CopyPartResult]()
  while (bytePosition < objectSize) {
    // The last part might be smaller than partSize, so check to make sure
    // that lastByte isn't beyond the end of the object.
    val lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1)

    // Copy this part.
    val copyRequest = new CopyPartRequest()
      .withSourceBucketName(fromS3BucketName)
      .withSourceKey(sourceS3Path)
      .withDestinationBucketName(toS3BucketName)
      .withDestinationKey(targetS3Path)
      .withUploadId(initResponse.getUploadId())
      .withFirstByte(bytePosition)
      .withLastByte(lastByte)
      .withPartNumber(partNum)
    partNum += 1
    copyResponses.add(s3client.copyPart(copyRequest))
    bytePosition += partSize
  }

  // Complete the upload request to concatenate all copied parts and make the copied object available.
  val completeRequest = new CompleteMultipartUploadRequest(
    toS3BucketName,
    targetS3Path,
    initResponse.getUploadId(),
    getETags(copyResponses))
  s3client.completeMultipartUpload(completeRequest)
  logger.info("Multipart upload complete.")
}

// This is a helper function to construct a list of ETags.
def getETags(responses: java.util.List[CopyPartResult]): ArrayList[PartETag] = {
  val etags = new ArrayList[PartETag]()
  val it = responses.iterator()
  while (it.hasNext()) {
    val response = it.next()
    etags.add(new PartETag(response.getPartNumber(), response.getETag()))
  }
  etags
}

def moveObject(sourceS3Path: String, targetS3Path: String, fromBucketName: String, toBucketName: String) {
  logger.info(s"Moving S3 file from $sourceS3Path ==> $targetS3Path")

  // Get the object size to decide between a simple copy and a multipart copy.
  val metadataResult = getS3ObjectMetadata(sourceS3Path, fromBucketName)
  val objectSize = metadataResult.getContentLength()

  if (objectSize > ALLOWED_OBJECT_SIZE) {
    logger.info("Object size is greater than 1GB. Initiating multipart upload.")
    copyMultipart(sourceS3Path, targetS3Path, fromBucketName, toBucketName)
  } else {
    s3client.copyObject(fromBucketName, sourceS3Path, toBucketName, targetS3Path)
  }

  // Delete the source object after a successful copy.
  s3client.deleteObject(fromBucketName, sourceS3Path)
}
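For illustration, a hypothetical call site (the bucket and key names are placeholders, and s3client, getS3ObjectMetadata, ALLOWED_OBJECT_SIZE and logger are assumed to be defined as in the snippet above):
// Move a single object within one bucket: copy to the new key, then delete the old one
moveObject(
  sourceS3Path = "path-to-source-directory/part-00000.csv",
  targetS3Path = "path-to-destination-directory/part-00000.csv",
  fromBucketName = "my-bucket",
  toBucketName = "my-bucket"
)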
You will need the AWS SDK for this.
If you are using AWS SDK version 1:
libraryDependencies ++= Seq(
  "com.amazonaws" % "aws-java-sdk-s3" % "1.12.248"
)
import com.amazonaws.services.s3.transfer.{Copy, TransferManager, TransferManagerBuilder}

val transferManager: TransferManager =
  TransferManagerBuilder.standard().build()

def copyFile(): Unit = {
  val copy: Copy =
    transferManager.copy(
      "source-bucket-name", "source-file-key",
      "destination-bucket-name", "destination-file-key"
    )
  copy.waitForCompletion()
}
If you are using AWS SDK version 2:
libraryDependencies ++= Seq(
  "software.amazon.awssdk" % "s3" % "2.17.219",
  "software.amazon.awssdk" % "s3-transfer-manager" % "2.17.219-PREVIEW"
)
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.s3.model.CopyObjectRequest
import software.amazon.awssdk.transfer.s3.{Copy, CopyRequest, S3ClientConfiguration, S3TransferManager}

// change Region.US_WEST_2 to your required region
// or it might even work without the whole `.region(Region.US_WEST_2)` thing
val s3ClientConfig: S3ClientConfiguration =
  S3ClientConfiguration
    .builder()
    .region(Region.US_WEST_2)
    .build()

val s3TransferManager: S3TransferManager =
  S3TransferManager.builder().s3ClientConfiguration(s3ClientConfig).build()

def copyFile(): Unit = {
  val copyObjectRequest: CopyObjectRequest =
    CopyObjectRequest
      .builder()
      .sourceBucket("source-bucket-name")
      .sourceKey("source-file-key")
      .destinationBucket("destination-bucket-name")
      .destinationKey("destination-file-key")
      .build()

  val copyRequest: CopyRequest =
    CopyRequest
      .builder()
      .copyObjectRequest(copyObjectRequest)
      .build()

  val copy: Copy =
    s3TransferManager.copy(copyRequest)
  copy.completionFuture().get()
}
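Since the goal is a move rather than a copy, you will also want to delete the source object once the copy has completed. A sketch with the v2 SDK (using a plain S3Client built separately from the transfer manager; the same idea applies to v1 via AmazonS3.deleteObject):
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest

// a basic client for the delete call; reuse your region configuration here as well
val s3Client: S3Client = S3Client.builder().region(Region.US_WEST_2).build()

def moveFile(): Unit = {
  copyFile() // copy as shown above
  // remove the original object so the net effect is a move
  s3Client.deleteObject(
    DeleteObjectRequest.builder()
      .bucket("source-bucket-name")
      .key("source-file-key")
      .build()
  )
}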
Keep in mind that you will need AWS credentials with appropriate permissions for both the source and the destination object. For this, you just need to get the credentials and make them available as the following environment variables:
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_SESSION_TOKEN=your_session_token
Also, "source-file-key" and "destination-file-key" should be the full path of the file in the bucket (e.g. reports/2023/data.csv rather than just data.csv).

How to list all files from resources folder with scala

Assume the following structure in your resources folder:
resources
├─spec_A
| ├─AA
| | ├─file-aev
| | ├─file-oxa
| | ├─…
| | └─file-stl
| ├─BB
| | ├─file-hio
| | ├─file-nht
| | ├─…
| | └─file-22an
| └─…
├─spec_B
| ├─AA
| | ├─file-aev
| | ├─file-oxa
| | ├─…
| | └─file-stl
| ├─BB
| | ├─file-hio
| | ├─file-nht
| | ├─…
| | └─file-22an
| └─…
└─…
The task is to read all files for a given specification spec_X, one subfolder at a time. For obvious reasons we do not want to hard-code the exact names as string literals to open with Source.fromResource("spec_A/AA/…") for hundreds of files.
Additionally, this solution should of course also run inside the development environment, i.e. without being packaged into a jar.
The only option I found to list files inside a resource folder is nio's FileSystem concept, since it can load a jar file as a file system. But this comes with two major downsides:
java.nio uses the Java Stream API, which I cannot collect from Scala code: Collectors.toList() cannot be made to compile because it cannot determine the right type.
The file system needs different base paths for OS file systems and jar-file-based file systems, so I have to manually differentiate between the two situations (testing vs. jar-based running).
First, lazily load the jar file system if needed:
private static FileSystem jarFileSystem;

static synchronized private FileSystem getJarFileAsFilesystem(String drg_file_root) throws URISyntaxException, IOException {
    if (jarFileSystem == null) {
        jarFileSystem = FileSystems.newFileSystem(ConfigFiles.class.getResource(drg_file_root).toURI(), Collections.emptyMap());
    }
    return jarFileSystem;
}
Next, do the limbo to figure out whether we are inside the jar or not by checking the protocol of the URL, and return a Path (the protocol inside the jar file will be jar:):
static Path getPathForResource(String resourceFolder, String filename) throws IOException, URISyntaxException {
    URL url = ConfigFiles.class.getResource(resourceFolder + "/" + filename);
    return "file".equals(url.getProtocol())
            ? Paths.get(url.toURI())
            : getJarFileAsFilesystem(resourceFolder).getPath(resourceFolder, filename);
}
And finally, list and collect into a Java list:
static List<Path> listPathsFromResource(String resourceFolder, String subFolder) throws IOException, URISyntaxException {
    return Files.list(getPathForResource(resourceFolder, subFolder))
            .filter(Files::isRegularFile)
            .sorted()
            .collect(toList());
}
Only then can we go back to Scala and fetch it:
class SpecReader {
  def readSpecMessage(spec: String): String = {
    List("CN", "DO", "KF")
      .flatMap(ConfigFiles.listPathsFromResource(s"/spec_$spec", _).asScala.toSeq)
      .flatMap(path ⇒ Source.fromInputStream(Files.newInputStream(path), "UTF-8").getLines())
      .reduce(_ + " " + _)
  }
}

object Main {
  def main(args: Array[String]): Unit = {
    System.out.println(new SpecReader().readSpecMessage(args.head))
  }
}
I put a running mini project here to prove it: https://github.com/kurellajunior/list-files-from-resource-directory
But of course this is far from optimal. I want to eliminate the two downsides mentioned above, so that:
Scala files only
no extra testing code in my production library
Here's a function for reading all files from a resource folder. My use case is with small files. Inspired by Jan's answers, but without needing a user-defined collector or messing with Java.
// Helper for reading an individual file.
def readFile(path: Path): String =
  Source.fromInputStream(Files.newInputStream(path), "UTF-8").getLines.mkString("\n")

// Static variable for storing a FileSystem. Will be loaded on the first call to getPath.
private var jarFS: FileSystem = null

/**
 * Gets a Path object corresponding to a URL.
 * @param url The URL could follow the `file:` (usually used in dev) or `jar:` (usually used in prod) protocols.
 * @return A Path object.
 */
def getPath(url: URL): Path = {
  if (url.getProtocol == "file")
    Paths.get(url.toURI)
  else {
    // This hacky branch is to handle reading resource files from a jar (where url is jar:...).
    val strings = url.toString.split("!")
    if (jarFS == null) {
      jarFS = FileSystems.newFileSystem(URI.create(strings(0)), Map[String, String]().asJava)
    }
    jarFS.getPath(strings(1))
  }
}

/**
 * Given a folder (e.g. "A"), reads all files under the resource folder (e.g. "src/main/resources/A/**") as a Seq[String].
 * @param folder Relative path to a resource folder under src/main/resources.
 * @return A sequence of strings. Each element corresponds to the contents of a single file.
 */
def readFilesFromResource(folder: String): Seq[String] = {
  val url = Main.getClass.getResource("/" + folder)
  val path = getPath(url)
  val ls = Files.list(path)
  ls.collect(Collectors.toList()).asScala.map(readFile) // Magic!
}
(not catered to example in question)
Relevant imports:
import java.nio.file._
import scala.collection.JavaConverters._ // Needed for .asScala
import java.net.{URI, URL}
import java.util.stream._
import scala.io.Source
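A hypothetical usage, wired to the folder layout from the question (the folder name is just an example):
// reads each file directly under src/main/resources/spec_A/AA into a string
val contents: Seq[String] = readFilesFromResource("spec_A/AA")
contents.foreach(println)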
Thanks to @TrebledJ's answer, this could be minimized to the following:
import java.nio.file.{FileSystem, FileSystems, Files, LinkOption, Path, Paths}
import scala.collection.JavaConverters._

class ConfigFiles(val basePath: String) {

  lazy val jarFileSystem: FileSystem =
    FileSystems.newFileSystem(getClass.getResource(basePath).toURI, Map[String, String]().asJava)

  def listPathsFromResource(folder: String): List[Path] = {
    Files.list(getPathForResource(folder))
      .filter(p ⇒ Files.isRegularFile(p, Array[LinkOption](): _*))
      .sorted.toList.asScala.toList // from Java Stream to java.util.List to Scala Buffer to Scala List
  }

  private def getPathForResource(filename: String) = {
    val url = classOf[ConfigFiles].getResource(basePath + "/" + filename)
    if ("file" == url.getProtocol) Paths.get(url.toURI)
    else jarFileSystem.getPath(basePath, filename)
  }
}
Special attention was necessary for the empty settings map.
Checking the URL protocol seems inevitable. Git repo updated, pull requests welcome: https://github.com/kurellajunior/list-files-from-resource-directory
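Again a hypothetical usage, matching the spec_X/subfolder layout from the question:
val configFiles = new ConfigFiles("/spec_A")
// lists the regular files under src/main/resources/spec_A/AA, whether running from the IDE or from the jar
val paths: List[Path] = configFiles.listPathsFromResource("AA")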

How to reference a nested scala object to dynamically set configs on logback.groovy?

I use the Typesafe Config library to manage my Scala project's configuration. I use the following pattern to organize and load my configurations in a type-safe manner:
object AppConfig {
  private val config = ConfigFactory.load()

  lazy val host: String = config.getString("host")

  object Splunk {
    private lazy val splunkConf = config.getConfig("splunk")
    lazy val index = splunkConf.getString("index")
    lazy val token = splunkConf.getString("token")
  }
}
I need to inject some of those configs in my logback.groovy. This works perfectly when accessing direct properties of AppConfig, for example AppConfig.host(), but not for nested objects like AppConfig.Splunk.token().
logback.groovy
appender("splunk", HttpEventCollectorLogbackAppender) {
url = "http://someSplunkUrl:8088/services/collector"
host = AppConfig.host()
token = AppConfig.Splunk.token()
index = AppConfig.Splunk.index()
batch_size_count = 1
layout(PatternLayout) {
pattern = "%msg"
}
}
groovy.lang.MissingPropertyException: No such property: Splunk for class: AppConfig
at groovy.lang.MetaClassImpl.invokeStaticMissingProperty(MetaClassImpl.java:1028)
at groovy.lang.MetaClassImpl.getProperty(MetaClassImpl.java:1932)
at groovy.lang.MetaClassImpl.getProperty(MetaClassImpl.java:1908)
at groovy.lang.MetaClassImpl.getProperty(MetaClassImpl.java:3886)
at org.codehaus.groovy.runtime.callsite.ClassMetaClassGetPropertySite.getProperty(ClassMetaClassGetPropertySite.java:50)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:298)
at Script1$_run_closure2.doCall(Script1.groovy:18)
at Script1$_run_closure2.doCall(Script1.groovy)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I ended up referencing the nested object as below:
def splunkConf = Class.forName("com.util.AppConfig\$Splunk\$")
.getDeclaredConstructor().newInstance()
I could then access the fields successfully :
token = splunkConf."token"()
index = splunkConf."index"()
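Another way to sidestep the reflection (a sketch, not part of the original answer): expose flat forwarders on AppConfig itself, so the Groovy script only ever touches direct members it can resolve, just like AppConfig.host():
import com.typesafe.config.ConfigFactory

object AppConfig {
  private val config = ConfigFactory.load()

  lazy val host: String = config.getString("host")

  object Splunk {
    private lazy val splunkConf = config.getConfig("splunk")
    lazy val index = splunkConf.getString("index")
    lazy val token = splunkConf.getString("token")
  }

  // flat forwarders; from logback.groovy they can be read as AppConfig.splunkIndex() / AppConfig.splunkToken()
  lazy val splunkIndex: String = Splunk.index
  lazy val splunkToken: String = Splunk.token
}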

Simple scala.swing application fails

I am using scalas to run a simple scala.swing application:
#!/usr/bin/env scalas
/***
scalaVersion := "2.12.6"
libraryDependencies += "org.scala-lang.modules" %% "scala-swing" % "2.1.1"
*/
import scala.swing._

object FirstSwingApp extends SimpleSwingApplication {
  def top = new MainFrame {
    title = "First Swing App"
    contents = new Button {
      text = "Click me"
    }
  }
}
This compiles and runs (on OSX 10.14), but there is no visible output; the process just terminates after a few seconds. What have I done wrong?
Judging from the documentation at https://www.scala-sbt.org/release/docs/Scripts.html, scalas doesn't work like java -jar ...; that is, it isn't running the main class of some object.
It merely executes the code as if it were a REPL, so if you want your code to run, run it yourself:
#!/usr/bin/env scalas
/***
scalaVersion := "2.12.6"
libraryDependencies += "org.scala-lang.modules" %% "scala-swing" % "2.1.1"
*/
import scala.swing._

// creates the object but doesn't run anything
object FirstSwingApp extends SimpleSwingApplication {
  def top = new MainFrame {
    title = "First Swing App"
    contents = new Button {
      text = "Click me"
    }
  }
}

FirstSwingApp.main(new Array[String](0)) // run main manually, or whatever you prefer

How to alter a subproject setting in an sbt-release step

I use the sbt-release plugin for a couple of projects. One step I use is docker:publish from sbt-native-packager to push an image to Docker Hub.
sbt-native-packager relies on the dockerUpdateLatest setting to decide whether to update the latest tag. The default is false; if it is true, it will update latest.
For one project, which has no sub-projects under root, I am able to use a custom ReleaseStep to change that setting depending on whether I am releasing a SNAPSHOT, i.e. I do not want to update the latest tag if the version ends in SNAPSHOT.
lazy val setDockerReleaseSettings = ReleaseStep(action = oldState => {
  // dockerUpdateLatest is set to true if the version is not a SNAPSHOT
  val extracted = Project.extract(oldState)
  val v = extracted.get(Keys.version)
  val snap = v.endsWith("SNAPSHOT")
  if (!snap)
    extracted.appendWithSession(Seq(dockerUpdateLatest := true), oldState)
  else
    oldState
})
The above works for that project.
For the other project, there are multiple projects aggregated under root. I would like to do something like
lazy val setDockerReleaseSettings = ReleaseStep(action = oldState => {
  // dockerUpdateLatest is set to true if the version is not a SNAPSHOT
  val extracted = Project.extract(oldState)
  val v = extracted.get(Keys.version)
  val snap = v.endsWith("SNAPSHOT")
  if (!snap)
    extracted.appendWithSession(Seq(dockerUpdateLatest in api := true, dockerUpdateLatest in portal := true), oldState)
  else
    oldState
})
But it does not seem to work. I also tried dockerUpdateLatest in Global and dockerUpdateLatest in root, to no avail. Any ideas how to alter dockerUpdateLatest in these sub-projects?
I was able to solve it with set every dockerUpdateLatest := true. I made a custom ReleaseStep like so:
lazy val createSetDockerUpdateLatestCommand = ReleaseStep(action = state => {
  // dockerUpdateLatest is set to true if the version is not a SNAPSHOT
  val snap = Project.extract(state).get(Keys.version).endsWith("SNAPSHOT")
  val setDockerUpdateLatest =
    if (!snap)
      Command.command("setDockerUpdateLatest") {
        "set every dockerUpdateLatest := true" :: _
      }
    else
      Command.command("setDockerUpdateLatest") {
        "" :: _
      }
  state.copy(definedCommands = state.definedCommands :+ setDockerUpdateLatest)
})
Then I run setDockerUpdateLatest as a subsequent release step, as sketched below.
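For context, a sketch of how this can be wired into the release pipeline (assuming sbt-release's standard releaseStepCommand helper; the surrounding steps are just the usual defaults and should be adapted to your own process):
import ReleaseTransformations._

releaseProcess := Seq[ReleaseStep](
  checkSnapshotDependencies,
  inquireVersions,
  runClean,
  runTest,
  setReleaseVersion,
  createSetDockerUpdateLatestCommand,          // registers the command based on the current version
  releaseStepCommand("setDockerUpdateLatest"), // runs it, toggling dockerUpdateLatest everywhere
  releaseStepCommand("docker:publish"),
  commitReleaseVersion,
  tagRelease,
  setNextVersion,
  commitNextVersion,
  pushChanges
)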