I have a multi-module SBT project in which each module writes sources into the next module's unmanagedSourceDirectories, and that later module then compiles the generated content (the process repeats down the chain). The project structure looks like this:
root
+ gen - writes content to useA/target/generated-sources
+ useA - writes content to useB/target/generated-sources
+ useB - writes content to useC/target/generated-sources
+ etc....
The unmanagedSourceDirectories are defined as follows (a build sketch of these settings follows the layout):
root
+ gen
+ useA - unmanagedSourceDirectories=useA/target/generated-sources
+ useB - unmanagedSourceDirectories=useB/target/generated-sources
+ etc...
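For illustration, the settings described above might be declared roughly like this in the build definition (module and path names are assumed from the layout above, not taken from the real build):

lazy val useA = (project in file("useA"))
  .settings(
    unmanagedSourceDirectories in Compile +=
      baseDirectory.value / "target" / "generated-sources"
  )

lazy val useB = (project in file("useB"))
  .settings(
    unmanagedSourceDirectories in Compile +=
      baseDirectory.value / "target" / "generated-sources"
  )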
Is there any way to achieve what I am looking for?
I am trying to rename S3 files, which is basically a copy to the target followed by a delete of the source.
In my case the copy to the target works, but the source is not deleted properly: the whole directory structure remains, just without any files. It also creates temp files in the main directory.
Do I have to delete the source explicitly after renaming?
Here is my code which renames the files. The folder contains subfolders.
// list all files two levels below the output folder
val file = fs.globStatus(new Path(outputFileURL + "/*/*"))
for (urlStatus <- file) {
  // the partition name is taken from a path segment that looks like ".../<key>=<partition>/..."
  val DataPartitionName = urlStatus.getPath.toString.split("=")(1).split("\\/")(0)
  val finalFileName = finalPrefix + DataPartitionName + "." + intFileCounter + "." + fileVersion + currentTime + fileExtention
  val dest = new Path(mainFileURL + "/" + finalFileName)
  // rename moves the file to dest (copy + delete of the source entry)
  fs.rename(urlStatus.getPath, dest)
  intFileCounter += 1
}
If you check the Apache Hadoop rename documentation, it says:
The core operation of rename()—moving one entry in the filesystem to another ..
So it is just moving the files, not renaming them; there is more detail at the link above.
So I guess you will have to delete the source folder explicitly after the renaming is complete.
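A minimal sketch of that cleanup, assuming the same fs and outputFileURL from the question: once all files have been moved, the now-empty partition folders can be removed with FileSystem.delete.

// remove the leftover, now-empty source folders after the rename loop has finished
for (dirStatus <- fs.globStatus(new Path(outputFileURL + "/*"))) {
  if (dirStatus.isDirectory) {
    fs.delete(dirStatus.getPath, true) // true = recursive delete
  }
}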
Instead of renaming only the files, you could rename the folder as well; then you would not have to delete the source folder explicitly afterwards.
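A sketch of that idea with hypothetical paths (the actual partition folder name would come from your data):

// move the whole partition folder in one call instead of moving file by file
val srcDir  = new Path(outputFileURL + "/DataPartition=somePartition") // hypothetical source folder
val destDir = new Path(mainFileURL + "/" + finalFolderName)            // hypothetical target name
fs.rename(srcDir, destDir)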
I have a project whose tree structure looks like this:
+ /Root
- .gitignore
+ - - /Folder A
- - - .gitignore <--- this fellow
+ - - - /bin
- - - - - fileA
+ - - - - fileB
+ - - - /Folder AA
+ - - /FolderB
+ - - - /bin
...
The .gitignore in the root folder has a lot of rules; among them is one that ignores all /bin folders.
However, in my FolderA I would like everything to stay as it is, all the way down in that folder.
FolderB/bin, on the other hand, should be ignored.
I know this is possible by adding another .gitignore in FolderA and letting it override the root folder's .gitignore, but I can't remember how.
What should I write in FolderA/.gitignore?
Edit:
In other words: "FolderA is a sacred folder and must keep all of its files, regardless of what any other .gitignore files say."
Override the root .gitignore in /FolderA/.gitignore using negation patterns (the ! syntax) to re-include files:
!file_that_should_not_be_ignored
!folder_that_should_not_be_ignored
Solution found:
Add a .gitignore to the folder that should not be ignored, containing the following as its only line:
!*/
I'm using sbt-native-packager 1.0.0-M5 to create my Docker image. I need to add a file that is neither a source file nor in the resources folder. My docker commands are as follows:
dockerCommands := Seq(
  Cmd("FROM", "myrepo/myImage:1.0.0"),
  Cmd("COPY", "test.txt keys/"), // <-- The failing part
  Cmd("WORKDIR", "/opt/docker"),
  Cmd("RUN", "[\"chown\", \"-R\", \"daemon\", \".\"]"),
  Cmd("USER", "daemon"),
  ExecCmd("CMD", "echo", "Hello, World from Docker")
)
It fails with: msg="test.txt: no such file or directory"
After digging around a bit, it seems I need to have test.txt in target/docker/stage; then it works. But how do I get it there automatically? The file actually lives in the root folder of the project.
I managed to get it to work by adding the file to mappings in Universal. So for you, you would need something like this:
mappings in Universal += file("test.txt") -> "keys/test.txt"
You won't need the COPY command if you do this, by the way.
Now, I'm not sure whether this mapping will also be picked up by other sbt-native-packager plugins. I hope a commenter can tell me whether that is true; my intuition is that it will, which might be a dealbreaker for you. But any workaround is better than none, right? If you use Build.scala, you could use a VM argument to tell sbt whether or not to add this mapping.
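A hypothetical sketch of that VM-argument idea (the property name docker.keys is made up, not part of sbt-native-packager):

// only add the extra mapping when sbt is started with -Ddocker.keys=true
mappings in Universal ++= {
  if (sys.props.get("docker.keys").contains("true"))
    Seq(file("test.txt") -> "keys/test.txt")
  else
    Seq.empty
}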
You may place all additional files that must be included in the container image into the folder src/universal. The content of that folder is automatically copied into the /opt/app folder of your container image; you don't need any additional configuration. See "Getting started with Universal Packaging" for additional info.
The files located in src/universal will be available in the runtime directory of the Scala app in the Docker container. This means that if your app has src/universal/example.txt, it can be accessed with scala.io.Source.fromFile("./example.txt").
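A minimal sketch of reading such a file at runtime, assuming src/universal/example.txt as in the example above:

import scala.io.Source

// example.txt was staged from src/universal and sits next to the app at runtime
val source = Source.fromFile("./example.txt")
try println(source.mkString)
finally source.close()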
I was able to get this working using dockerPackageMappings:
dockerPackageMappings in Docker += (baseDirectory.value / "docker" / "ssh_config") -> "ssh_config"

dockerCommands := (dockerCommands.value match {
  case Seq(from @ Cmd("FROM", _), rest @ _*) =>
    Seq(
      from,
      Cmd("ADD", "ssh_config", "/sbin/.ssh/config")
    ) ++ rest
})
This is for the sbt-docker plugin, not sbt-native-packager.
I was able to add files this way:
For example, to add a file located in src/main/resources/docker/some-file.ext:
dockerfile in docker := {
  val targetPath = "/usr/app"

  // map of (relativeName -> File) of all files in the resources/docker dir, for convenience
  val dockerFiles = {
    val resources = (unmanagedResources in Runtime).value
    val dockerFilesDir = resources.find(_.getPath.endsWith("/docker")).get
    resources
      .filter(_.getPath.contains("/docker/"))
      .map(r => dockerFilesDir.toURI.relativize(r.toURI).getPath -> r)
      .toMap
  }

  new Dockerfile {
    from(s"$namespace/$baseImageName:$baseImageTag")
    // ...
    add(dockerFiles("some-file.ext"), s"$targetPath/some-file.ext")
    // ...
  }
}
You can add an entire directory to a Docker image's file system by first making it available using dockerPackageMappings, and then COPYing it as an additional Docker command.
import NativePackagerHelper._
dockerPackageMappings in Docker ++= directory(baseDirectory.value / ".." / "frontend" )
dockerCommands ++= Seq(
  Cmd("COPY", "frontend /opt/frontend")
)
I'm a little confused by the Scala/SBT documentation for creating SBT tasks. Currently I can run the following from the command line:
sbt ";set target := file(\"$PWD/package/deb-upstart\"); set serverLoading in Debian := com.typesafe.sbt.packager.archetypes.ServerLoader.Upstart; debian:packageBin; set target := file(\"$PWD/package/deb-systemv\"); set serverLoading in Debian := com.typesafe.sbt.packager.archtypes.ServerLoader.SystemV; debian:packageBin; set target := file(\"$PWD/package/rpm-systemd\"); rpm:packageBin"
This resets my target each time to a different directory (deb-upstart, deb-systemv and rpm-systemd) and runs an sbt-native-packager task for each of those settings. (Yes, I realize I'm compiling it three times, but sbt-native-packager doesn't seem to have a setting for the artifact directory.)
This works fine from a bash prompt, but I've been trying to put the same command into Jenkins (replacing $PWD with $WORKSPACE) and I can't seem to get the quote escaping right. I thought it might be easier to have a task in either build.sbt or project/Build.scala that runs all three of those tasks, changing the target setting each time (and replacing $PWD or $TARGET with the full path of the base directory).
I've attempted the following:
lazy val packageAll = taskKey[Unit]("Creates deb-upstart, deb-systemv and rpm-systemd packages")

packageAll := {
  target := baseDirectory.value / "package" / "deb-upstart"
  serverLoading in Debian := com.typesafe.sbt.packager.archetypes.ServerLoader.Upstart
  (packageBin in Debian).value

  target := baseDirectory.value / "package" / "deb-systemv"
  serverLoading in Debian := com.typesafe.sbt.packager.archetypes.ServerLoader.SystemV
  (packageBin in Debian).value

  target := baseDirectory.value / "package" / "rpm-systemd"
  (packageBin in Rpm).value
}
But the trouble is that .value causes those tasks to be evaluated before my task even runs, so they don't pick up the new target setting (as stated in this other question: How can I call another task from my SBT task?).
So, I figured this out for you :)
As you already mentioned, combining multiple tasks into a single one, where some of the tasks depend on the same setting, doesn't work out as expected.
Instead we do the following:
1. Create a task for each of our custom steps, e.g. packaging debian for upstart
2. Define an alias that executes these commands in order
Define tasks
lazy val packageDebianUpstart = taskKey[File]("creates deb-upstart package")
lazy val packageDebianSystemV = taskKey[File]("creates deb-systemv package")
lazy val packageRpmSystemD = taskKey[File]("creates rpm-systemd package")
Example task implementation
The implementation is pretty simple and the same for each of the tasks.
// don't forget this
import com.typesafe.sbt.packager.archetypes._

packageDebianUpstart := {
  // define where you want to put your stuff
  val output = baseDirectory.value / "package" / "deb-upstart"
  // run task
  val debianFile = (packageBin in Debian).value
  // place it where you want
  IO.move(debianFile, output)
  output
}
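For completeness, a sketch of how one of the other two tasks could follow the same pattern (this variant is not from the original answer and places the artifact inside the output directory rather than at the output path itself):

packageRpmSystemD := {
  // where the rpm-systemd artifact should end up
  val output = baseDirectory.value / "package" / "rpm-systemd"
  // run the RPM packaging task and move its result into the output directory
  val rpmFile = (packageBin in Rpm).value
  IO.move(rpmFile, output / rpmFile.getName)
  output / rpmFile.getName
}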
Define alias
And now compose these tasks into a single alias with
addCommandAlias("packageAll", "; clean " +
  "; set serverLoading in Debian := com.typesafe.sbt.packager.archetypes.ServerLoader.SystemV" +
  "; packageDebianSystemV " +
  "; clean " +
  "; set serverLoading in Debian := com.typesafe.sbt.packager.archetypes.ServerLoader.Upstart" +
  "; packageDebianUpstart " +
  "; packageRpmSystemD")
You can look at the complete code here.
Update
Setting the ServerLoader inside the alias seems to be the right way to solve this. A clean is unfortunately necessary between each build of the same output format.
I use the universal plugin of sbt-native-packager to generate a zip file distribution.
sbt universal:packageBin
The generated zip file, once extracted, contains everything inside a main directory named after my zip file:
unzip my-project-0.0.1.zip
my-project-0.0.1/bin
my-project-0.0.1/lib
my-project-0.0.1/conf
...
How can I create a zip that has no root folder, so that when extracted it has a structure like this?
bin
lib
conf
Thanks
I'm not confident enough with sbt and Scala to submit a pull request, and bash scripting has to be excluded right now, so my current (and ugly) solution is this one:
// needs: import com.typesafe.sbt.packager.universal.ZipHelper
packageBin in Universal := {
  // the zip produced by the standard task (with the top-level directory inside)
  val originalFileName = (packageBin in Universal).value
  val (base, ext) = originalFileName.baseAndExt
  val newFileName = file(originalFileName.getParent) / (base + "_dist." + ext)

  // unzip next to the original and build mappings relative to the extracted root folder
  val extractedFiles = IO.unzip(originalFileName, file(originalFileName.getParent))
  val mappings: Set[(File, String)] = extractedFiles.map { f =>
    (f, f.getAbsolutePath.substring(originalFileName.getParent.size + base.size + 2))
  }

  // restore the executable flag on the start scripts, which unzipping does not preserve
  val binFiles = mappings.filter { case (file, path) => path.startsWith("bin/") }
  for (f <- binFiles) f._1.setExecutable(true)

  // re-zip without the top-level directory, replace the original artifact and clean up
  ZipHelper.zip(mappings, newFileName)
  IO.move(newFileName, originalFileName)
  IO.delete(file(originalFileName.getParent + "/" + originalFileName.base))
  originalFileName
}
The solution proposed on GitHub seems to be way nicer than mine, even though it doesn't work for me:
https://github.com/sbt/sbt-native-packager/issues/276
Unfortunately it looks like the name of that top-level directory is fixed to be the same as the name of the distributable zip (check out line 24 of the ZipHelper source on GitHub).
So unless you feel like making it configurable and submitting a pull request, it might just be easier to modify the resulting zip on the command line (assuming some flavour of UNIX):
unzip my-project-0.0.1.zip && cd my-project-0.0.1 && zip -r ../new.zip * && cd -
This will create new.zip alongside the existing zipfile - you could then mv it over the top if you like.