how to alter a subproject setting in an sbt-release step - scala

I use the sbt-release plugin for a couple of projects. One of the release steps runs docker:publish from sbt-native-packager to push an image to Docker Hub.
sbt-native-packager relies on the dockerUpdateLatest setting to decide whether to update the latest tag: the default is false, and when it is true the latest tag is updated.
For one project, which has no subprojects under root, I am able to use a custom ReleaseStep to change that setting depending on whether I am releasing a SNAPSHOT, i.e. I do not want to update the latest tag if the version ends in SNAPSHOT.
lazy val setDockerReleaseSettings = ReleaseStep(action = oldState => {
  // dockerUpdateLatest is set to true if the version is not a SNAPSHOT
  val extracted = Project.extract(oldState)
  val v = extracted.get(Keys.version)
  val snap = v.endsWith("SNAPSHOT")
  if (!snap)
    extracted.appendWithSession(Seq(dockerUpdateLatest := true), oldState)
  else oldState
})
The above works for that project.
For the other project, there are multiple projects aggregated under root. I would like to do something like
lazy val setDockerReleaseSettings = ReleaseStep(action = oldState => {
  // dockerUpdateLatest is set to true if the version is not a SNAPSHOT
  val extracted = Project.extract(oldState)
  val v = extracted.get(Keys.version)
  val snap = v.endsWith("SNAPSHOT")
  if (!snap)
    extracted.appendWithSession(
      Seq(dockerUpdateLatest in api := true, dockerUpdateLatest in portal := true),
      oldState)
  else oldState
})
But it does not seem to work. I also tried dockerUpdateLatest in Global, and dockerUpdateLatest in root to no avail. Any ideas how to alter dockerUpdateLatest in these sub projects?

I was able to solve it with set every dockerUpdateLatest := true. I made a custom ReleaseStep like so:
lazy val createSetDockerUpdateLatestCommand = ReleaseStep(action = state => {
  // dockerUpdateLatest is set to true if the version is not a SNAPSHOT
  val snap = Project.extract(state).get(Keys.version).endsWith("SNAPSHOT")
  val setDockerUpdateLatest =
    if (!snap)
      Command.command("setDockerUpdateLatest") {
        "set every dockerUpdateLatest := true" :: _
      }
    else
      Command.command("setDockerUpdateLatest") {
        "" :: _ // no-op for SNAPSHOT versions
      }
  state.copy(definedCommands = state.definedCommands :+ setDockerUpdateLatest)
})
Then I run setDockerUpdateLatest as a subsequent step in the release process.
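For reference, a minimal sketch of how such a step might be wired into sbt-release's releaseProcess. The surrounding steps are the plugin's standard ones; treat the exact ordering as an assumption about your build, not a drop-in:

```scala
// build.sbt (sketch) -- assumes sbt-release and sbt-native-packager are enabled
import ReleaseTransformations._

releaseProcess := Seq[ReleaseStep](
  checkSnapshotDependencies,
  inquireVersions,
  setReleaseVersion,
  createSetDockerUpdateLatestCommand,          // registers the command on the state
  releaseStepCommand("setDockerUpdateLatest"), // runs it: "set every dockerUpdateLatest := true"
  releaseStepCommand("docker:publish"),
  commitReleaseVersion,
  tagRelease,
  setNextVersion,
  commitNextVersion
)
```

The key point is that the custom step only registers the command, so a separate releaseStepCommand invocation is needed to actually execute it before docker:publish runs.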


Download all the files from a s3 bucket using scala

The code below downloads a single file successfully, but I am unable to download the whole list of files:
client.getObject(
  new GetObjectRequest(bucketName, "TestFolder/TestSubfolder/Psalm/P.txt"),
  new File("test.txt"))
Thanks in advance
Update
I tried the code below, but it returns a list of directories; I want a list of files instead.
val listObjectsRequest = new ListObjectsRequest().
  withBucketName("tivo-hadoop-dev").
  withPrefix("prefix").
  withDelimiter("/")
client.listObjects(listObjectsRequest).getCommonPrefixes
It turned out to be a simple thing, but I struggled quite a bit before arriving at the answer below.
I found some Java code, converted it to Scala, and it worked:
import scala.collection.JavaConverters._ // needed to iterate the Java list of summaries

val client = new AmazonS3Client(credentials)
val listObjectsRequest = new ListObjectsRequest().
  withBucketName("bucket-name").
  withPrefix("path/of/dir").
  withDelimiter("/")
var objects = client.listObjects(listObjectsRequest)
do {
  for (objectSummary <- objects.getObjectSummaries().asScala) {
    val key = objectSummary.getKey()
    println(key)
    val fileName = key.split("/").last
    client.getObject(
      new GetObjectRequest("bucket-name", key), // same bucket as in the listing request
      new File("some/path/" + fileName))
  }
  objects = client.listNextBatchOfObjects(objects)
} while (objects.isTruncated())
The code below is fast and useful, especially when you want to download all objects to a specific local directory. It keeps the files under the exact same S3 prefix hierarchy:
val xferMgrForAws: TransferManager =
  TransferManagerBuilder.standard().withS3Client(awsS3Client).build()
val objectListing: ObjectListing = awsS3Client.listObjects(awsBucketName, prefix)
val summaries: java.util.List[S3ObjectSummary] = objectListing.getObjectSummaries()
if (summaries.size() > 0) {
  val xfer: MultipleFileDownload =
    xferMgrForAws.downloadDirectory(awsBucketName, prefix, new File(localDirPath))
  xfer.waitForCompletion()
  println("All files downloaded successfully!")
} else {
  println("No object present in the bucket!")
}

How to use Scaldi Conditions to do default binding

I am using Scaldi with Play and Slick in my application.
I need to bind a DatabaseConfig dependency to different configurations depending on some condition.
Mode = Dev => Oracle DB
Mode = UAT => Another Oracle DB
...
Mode = Test => Local H2 DB
No Mode specified => same as Mode = Test
How do I handle the last part? I tried to do the following but it does not work.
val inDevMode = SysPropCondition(name = "mode", value = Some("dev"))
val inTestMode = SysPropCondition(name = "mode", value = Some("test")) or
  SysPropCondition(name = "mode", value = None)

bind [DatabaseConfig[JdbcProfile]] when (inDevMode) to
  new DbConfigHelper().getDecryptedConfig("gem2g") destroyWith (_.db.close)
bind [DatabaseConfig[JdbcProfile]] when (inTestMode) to
  DatabaseConfig.forConfig[JdbcProfile]("h2") destroyWith (_.db.close)
The following worked for me: add an explicit Condition for the case where the property is not set at all.
val inTestMode = SysPropCondition(name = MODE, value = Some("test")) or
  SysPropCondition(name = MODE, value = None) or
  Condition(System.getProperty(MODE) == null)

neo4j 3.0 embedded - no nodes

There's something I must be missing about embedded Neo4j 3.0. I create a node, set some properties, and mark the transaction as successful. I then re-open the DB, but there are no nodes in it! What am I missing here? The Neo4j documentation is pretty poor.
import scala.collection.JavaConverters._ // for .asScala on the Java iterator

val graph1 = {
  val graphDb = new GraphDatabaseFactory()
    .newEmbeddedDatabase(new File("/opt/neo4j/deviceGraphTest"))
  val tx = graphDb.beginTx()
  val node = graphDb.createNode()
  node.setProperty("name", "kitchen island")
  node.setProperty("bulbType", "incandescent")
  tx.success()
  graphDb.shutdown()
}

val graph2 = {
  val graphDb2 = new GraphDatabaseFactory()
    .newEmbeddedDatabase(new File("/opt/neo4j/deviceGraphTest"))
  val tx2 = graphDb2.beginTx()
  val allNodes = graphDb2.getAllNodes.iterator().asScala.toList
  allNodes.foreach(node => printNode(node))
}
The transaction you have opened has to be closed with tx.close() after marking it successful with tx.success(). I do not know the exact Scala syntax, but it would be good to put the whole block into a try/catch and close the transaction in a finally block.
Here is the documentation for Java: https://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/Transaction.html
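Since the answer leaves the Scala syntax open, here is a minimal sketch of the write block with the transaction closed in a finally block (paths and property names taken from the question; in Neo4j 3.x the commit only happens once close() is called after success()):

```scala
import java.io.File
import org.neo4j.graphdb.factory.GraphDatabaseFactory

val graphDb = new GraphDatabaseFactory()
  .newEmbeddedDatabase(new File("/opt/neo4j/deviceGraphTest"))
val tx = graphDb.beginTx()
try {
  val node = graphDb.createNode()
  node.setProperty("name", "kitchen island")
  node.setProperty("bulbType", "incandescent")
  tx.success() // mark for commit; the actual commit happens on close()
} finally {
  tx.close()   // without this, the transaction is rolled back
}
graphDb.shutdown()
```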

How to define routes for static file in scala playframework

I am really stuck here: I need to know how to define a route for a static file. The file is in the root folder of the project, not in public.
I tried this
GET /dump.txt controllers.Assets.at(path=".", "dump.txt")
which gives me a compilation error: "Identifier expected".
The code that generates the file:
val pw = new PrintWriter(new File("dump.txt"))
val result = Json.parse(qdb.sourceSystem(request.session("role")))
val source_system = (result \ "source_system").get.as[List[String]]
for (x <- source_system) {
  try {
    // val flagedJsonData = Json.parse(qdb.getSourceLineagesFromDBFlag(x, request.session("role")))
    // val flagData = (flagedJsonData \ "flaglist").get.as[String]
    val flagData = qdb.getSourceLineagesFromDBFlag(x, request.session("role"))
    if (!flagData.isEmpty) {
      val myarray = flagData.split(",")
      for (variable <- myarray) {
        var dump = variable.split("::").map(y => "\"%s\"".format(y)).mkString(",")
        dump = "\"%s\",".format(x) + dump
        pw.write(dump + "\n")
      }
    }
  } catch {
    case _: Throwable => println("Data Not Found for " + x)
  }
}
pw.close()
The Play docs address this well. I am assuming that you are using Play 2.x.
You will need to add an extra route in your conf/routes file that maps to the special Assets controller. The at method lets you specify which physical file the route serves. If it's in your root folder, you shouldn't need any path spec.
The Assets controller is for serving "assets", i.e. public static files.
You are trying to serve a non-public file that looks dynamic (the name "dump" suggests you plan to regenerate it from time to time, so it does not seem static).
So, if you are sure your file is an "asset", i.e. public and static, put it into the project's public folder (or a subfolder) and use controllers.Assets as you describe, or as documented.
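For that public-and-static case, the standard route from the Play docs looks like this (the /assets prefix is the conventional one; adjust it to your setup):

```
GET /assets/*file controllers.Assets.at(path="/public", file)
```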
If you want to serve a non-public, dynamic file, you need to create your own controller. Here is a little example you can take and adapt:
app/controllers/Static.scala
package controllers

import play.api._
import play.api.mvc._
import scala.io.Source
import play.api.Play.current

class Static extends Controller {
  def file(path: String) = Action {
    val file = Play.application.getFile(path)
    if (file.exists())
      Ok(Source.fromFile(file.getCanonicalPath()).mkString)
    else
      NotFound
  }
}
and in conf/routes:
GET /build.sbt controllers.Static.file(path="build.sbt")
The result in the browser at http://localhost:9000/build.sbt is the content of build.sbt:
name := """scala-assests-multy"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.6"
libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  specs2 % Test
)
resolvers += "scalaz-bintray" at "http://dl.bintray.com/scalaz/releases"
// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator
You can take this controller and use it for serving your dump file with:
GET /dump.txt controllers.Static.file(path="dump.txt")
This is how I bootstrap my SPA.
class Bootstrap extends Controller {
  def index = Assets.at("/public", "app/index.html")
}
For SPA static assets, I just make sure they're visible from my routes.
GET / controllers.Bootstrap.index
GET /views/*file controllers.Assets.versioned(path="/public/app/js/views", file: Asset)
GET /*file controllers.Assets.versioned(path="/public/app", file: Asset)

Can I use non volatile external variables in Scala Enumeratee?

I need to group the output of my Enumerator into different ZipEntries based on a specific property (providerId). The original chartPreparations stream is ordered by providerId, so I can just keep a reference to the current provider and add a new entry when the provider changes.
Enumerator.outputStream(os => {
  val currentProvider = new AtomicReference[String]()
  // Step 1. Create the zipped output file
  val zipOs = new ZipOutputStream(os, Charset.forName("UTF8"))
  // Step 2. Process the chart preparation Enumerator
  val chartProcessingTask = chartPreparations run Iteratee.foreach(cp => {
    // Step 2.1. Write a new entry if needed
    if (currentProvider.get() == null || cp.providerId != currentProvider.get()) {
      if (currentProvider.get() != null) {
        zipOs.write("</body></html>".getBytes(Charset.forName("UTF8")))
      }
      currentProvider.set(cp.providerId)
      zipOs.putNextEntry(new ZipEntry(cp.providerName + ".html"))
      zipOs.write(HTML_HEADER)
    }
    // Step 2.2. Write the chart preparation in HTML format
    zipOs.write(toHTML(cp).getBytes(Charset.forName("UTF8")))
  })
  // Step 3. Close the stream on completion
  chartProcessingTask.onComplete(_ => zipOs.close())
})
Since the current provider reference changes during the output, I made it an AtomicReference so that I could handle access from different threads.
Can currentProvider just be a var Option[String], and why?