How can I get the project path in Scala? - eclipse

I'm trying to read some files from my Scala project, and if I use new java.io.File(".").getCanonicalPath() I find that my current directory is far away from them (it is exactly where I have installed Scala Eclipse). So how can I change the current directory to the root of my project, or get the path to my project? I really don't want to hard-code an absolute path to my input files.
val PATH = raw"E:\lang\scala\progfun\src\examples\"

def printFileContents(filename: String): Unit = {
  try {
    println("\n" + PATH + filename)
    io.Source.fromFile(PATH + filename).getLines.foreach(println)
  } catch {
    case _: Throwable => println("filename " + filename + " not found")
  }
}

val filenames = List("random.txt", "a.txt", "b.txt", "c.txt")
filenames foreach printFileContents

Add your files to src/main/resources/<packageName> where <packageName> is your class package.
Then change the PATH line to: val PATH = getClass.getResource("").getPath
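For example, a minimal sketch of the adapted lookup (a.txt is just a placeholder name; getResource("") resolves to the directory of the calling class on the classpath, which is where files from src/main/resources/<packageName> end up after a build):

// Hedged sketch: PATH now points at the compiled package directory on the classpath,
// so files copied from src/main/resources/<packageName> sit next to the classes.
val PATH = getClass.getResource("").getPath
io.Source.fromFile(PATH + "a.txt").getLines.foreach(println) // "a.txt" is a placeholder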

new File(".").getCanonicalPath
will give you the base path you need
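For instance, a minimal sketch that builds a project-relative path from it (the src/examples segment and file name are only an example layout):

// Hedged sketch: resolve an input file relative to the working directory,
// which in Eclipse usually defaults to the project root when running the project.
val base  = new java.io.File(".").getCanonicalPath
val input = new java.io.File(base, "src/examples/a.txt") // example relative path
println(io.Source.fromFile(input).mkString)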

Another workaround is to put the path you need in a user environment variable and read it with sys.env (which throws an exception if the variable is missing) or System.getenv (which returns null if it is missing), for example val PATH = sys.env("ScalaProjectPath"). The problem is that if you move the project you have to update the variable, which I didn't want.
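A minimal sketch of the two lookups, assuming the variable name ScalaProjectPath from the example above has been set to the project root:

// ScalaProjectPath is a placeholder variable name; set it to your project root first.
val strict: String         = sys.env("ScalaProjectPath")       // throws NoSuchElementException if unset
val safe:   Option[String] = sys.env.get("ScalaProjectPath")   // None if unset, no exception
val legacy: String         = System.getenv("ScalaProjectPath") // null if unset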

Related

Obtain filename with extension on filepath returned by HDFS

I am in the middle of writing a function to obtain the file list inside a particular directory in HDFS. The following code successfully obtains the list:
val status = fileSystem.listStatus(new Path("/" + ownerId + folderName ))
status.foreach(x=> println(x.getPath))
From x.getPath, I obtained a list of paths:
hdfs://localhost:54310/david/12345/account.csv
hdfs://localhost:54310/david/12345/iris.csv
How do I filter the paths to obtain just the filenames account.csv and iris.csv? Note that I am developing in a local environment, so when deployed to a remote server we may get something like the path below,
hdfs://localhost:54310/media/david/12345/iris.csv
which has a deeper path.
This will do, no regex needed
val filepath = x.getPath().toString()
val filename = filepath.substring(filepath.lastIndexOf('/') + 1)
You can use getName
val status = fileSystem.listStatus(new Path("/" + ownerId + folderName ))
status.foreach(x=> println(x.getPath.getName))
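If you only want the names of plain files, a small sketch (assuming the same fileSystem, ownerId and folderName as above, and a Hadoop 2.x FileStatus API):

// Hedged sketch: keep only regular files and map each to its base name,
// e.g. "account.csv" and "iris.csv".
val names: Array[String] =
  fileSystem.listStatus(new Path("/" + ownerId + folderName))
    .filter(_.isFile)
    .map(_.getPath.getName)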

Saving screenshots with protractor

I'm attempting to save a screenshot using a generic method in Protractor. It has two features: it creates the folder if it does not exist, and it saves the file (under certain conditions).
export function WriteScreenShot(data: string, filename: string) {
  let datetime = moment().format('YYYYMMDD-hhmmss');
  filename = `../../../test-reports/${filename}.${datetime}.png`;
  let path = filename.substring(0, filename.lastIndexOf('/'));
  if (!fs.existsSync(path)) {
    fs.mkdirSync(path);
  }
  let stream = fs.createWriteStream(filename);
  stream.write(new Buffer(data, 'base64'));
  stream.end();
}
This can be used by calling browser.takeScreenshot().then(png => WriteScreenShot(png, 'login/login-page')); With this example call, I assumed a file would be created in a path relative to where my WriteScreenShot method's file resides. But that does not appear to be the case.
For example, when I run my spec test from the spec's folder, the image gets saved in the correct place. But if I run it from the project root, an error is captured. Obviously, this has to do with my relative path reference. How do I capture the project's root directory and build from that, so that I can run the test from any directory?
This is a classic directory access error. Let me explain what is happening in your code:
let path = filename.substring(0, filename.lastIndexOf('/'));
The above line evaluates to the directory part of the relative path, e.g. ../../../test-reports/login
fs.existsSync then checks whether this path exists or not:
Case 1 (positive flow): your spec folder is the current working directory from which you are trying to create the reports folder. When you run your test from there, the path exists, the test-reports directory and screenshots are generated, and your code works fine.
Case 2 (negative flow): when you run it from the root directory, which is now the current working directory, fs.existsSync checks the path and the reports folder inside it. If they don't exist, fs.mkdirSync tries to create the directories, but it fails because it cannot create multiple nested directories in one call.
You should use Node.js's native path module to extract the path instead of a string substring, and the external mkdirp module to create nested directories.
import * as fs from 'fs';
import * as path from 'path';
import * as moment from 'moment';
const mkdirp = require('mkdirp'); // npm i -D mkdirp

export function WriteScreenShot(data: string, filename: string) {
  let datetime = moment().format('YYYYMMDD-hhmmss');
  filename = `../../../test-reports/${filename}.${datetime}.png`;
  let filePath = path.dirname(filename); // the directory part of filename (relative path)
  // or: let filePath = path.resolve(__dirname); // absolute path of this file's directory
  // or: let filePath = path.resolve('.');       // absolute path of the current working directory
  if (!fs.existsSync(filePath)) {
    mkdirp.sync(filePath); // creates nested folders if they don't exist
  }
  let stream = fs.createWriteStream(filename);
  stream.write(Buffer.from(data, 'base64'));
  stream.end();
}
If you are curious about the difference between mkdir and mkdir -p, please read this SO thread.

Compute file content hash with Scala

In our app, we need to compute a file hash so we can later check whether the file was updated.
The way I am doing it right now is with this little method:
protected[services] def computeMigrationHash(toVersion: Int): String = {
  val migrationClassName = MigrationClassNameFormat.format(toVersion, toVersion)
  val migrationClass = Class.forName(migrationClassName)
  val fileName = migrationClass.getName.replace('.', '/') + ".class"
  val resource = getClass.getClassLoader.getResource(fileName)
  logger.debug("Migration file - " + resource.getFile)
  val file = new File(resource.getFile)
  // Guava: com.google.common.io.Files and com.google.common.hash.Hashing
  val hc = Files.hash(file, Hashing.md5())
  logger.debug("Calculated migration file hash - " + hc.toString)
  hc.toString
}
It all works perfectly until the code gets deployed into a different environment and the file is located at a different absolute path. I guess the hashing takes the path into account as well.
What is the best way to calculate a reliable hash of a file's content that will produce the same result for as long as the content of the file stays the same?
Thanks,
Having perused the source code (https://github.com/google/guava/blob/master/guava/src/com/google/common/io/Files.java), I can see that only the file contents are hashed; the path does not come into play.
public static HashCode hash(File file, HashFunction hashFunction) throws IOException {
  return asByteSource(file).hash(hashFunction);
}
Therefore you need not worry about the location of the file. As to why you end up with a different hash on a different filesystem: compare the file sizes/contents to make sure that, for example, no differing line endings were introduced.
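If you want to convince yourself that only the bytes matter, here is a minimal sketch that hashes the content with the plain JDK instead of Guava (MD5 to match the code above; the path argument is just an example):

import java.nio.file.{Files, Paths}
import java.security.MessageDigest

// Hedged sketch: content-only MD5 using the JDK; the result depends solely on the
// file's bytes, never on its location.
def md5Of(pathToFile: String): String = {
  val bytes = Files.readAllBytes(Paths.get(pathToFile))
  MessageDigest.getInstance("MD5").digest(bytes).map("%02x".format(_)).mkString
}

// md5Of("/tmp/V1.class") == md5Of("/other/place/V1.class") whenever the bytes are identical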

How do I get the file/folder path using Apache Commons net FTPS Client

I am using Apache commons ftps client to connect to an ftps server. I have the remote file path which is a directory. This directory contains a tree of sub-directories and files. I want to get the path for each file or folder. Is there any way I can get this property? Or if there is any way I could get the parent folder path, I could concatenate the file name to it.
I am currently using the function below to get the path and size of all files under a certain directory. It lists all files in the current directory and checks whether each entry is a directory or a file. If it is a directory, it calls itself recursively until the end of the tree; if it is a file, it saves the path and size. You may not need the Map parts; edit according to your needs.
Usage:
getServerFiles(ftp,"") // start from root
or
getServerFiles(ftp,"directory_name") // start from given directory
Implementation:
import org.apache.commons.net.ftp.{FTPClient, FTPFile}

def getServerFiles(ftp: FTPClient, dir: String): Map[String, Double] = {
  var fileList = Array[FTPFile]()
  var base = ""
  if (dir == "") {
    fileList = ftp.listFiles
  } else {
    base = dir + "/"
    fileList = ftp.listFiles(dir)
  }
  fileList.flatMap { x =>
    if (x.isDirectory) {
      getServerFiles(ftp, base + x.getName)              // recurse into sub-directory
    } else {
      Map[String, Double](base + x.getName -> x.getSize) // path -> size
    }
  }.toMap
}

Potential flaw with SBT's IO.zip method?

I'm working on an SBT plugin where I'd like to zip up a directory. This is possible due to the following method in IO:
def zip(sources: Traversable[(File,String)], outputZip: File): Unit
After tinkering with this method, it seems that simply passing it a directory and expecting the resulting zip file to have the same file & folder structure is wrong. Passing a directory (empty or otherwise) results in the following:
[error]...:zipper: java.util.zip.ZipException: ZIP file must have at least one entry
Therefore, it appears that the way to use the zip method is by stepping through the directory and adding each file individually to the Traversable object.
Assuming my understanding is correct, this strikes me as very odd; very rarely do users need to cherry-pick what is to be added to an archive.
Any thoughts on this?
It seems like you can use this to compose a zip with files from multiple places. I can see the use of that in a build system.
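A minimal sketch of composing one archive from two places (directory names and the "lib/" prefix inside the archive are placeholders):

// Hedged sketch, assuming an sbt build or plugin where IO and the Path helpers
// (file, **, allSubpaths) are in scope, e.g. via import sbt._ plus sbt.Path._ /
// sbt.io.Path._ depending on the sbt version.
val srcDir = file("src/main/resources")
val jarDir = file("target/extra-jars")
val entries: Traversable[(File, String)] =
  allSubpaths(srcDir) ++ (jarDir ** "*.jar").get.map(f => (f, "lib/" + f.getName))
IO.zip(entries, file("target/composed.zip"))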
A bit late to the party, but this should do what you need:
val parentFolder: File = ???
val folderName: String = ???
val src: File = parentFolder / folderName
val tgt: File = parentFolder / s"$folderName.zip"
IO.zip(allSubpaths(src), tgt)
Here is some code for zipping directories using sbt's IO class:
IO.withTemporaryDirectory { base =>
  val dirToZip = new File(base, "lib")
  IO.createDirectory(dirToZip)
  IO.write(dirToZip / "test1", "test")
  IO.write(dirToZip / "test2", "test")
  val zip: File = base / "test.zip"
  IO.zip(allSubpaths(dirToZip), zip)
  val out: File = base / "out"
  IO.createDirectory(out)
  IO.unzip(zip, out) mustEqual Set(out / "test1", out / "test2")
  IO.delete((out ** "*").get)
  // Create a zip containing this lib directory but under a different directory in the zip
  val finder: PathFinder = dirToZip ** "*" --- dirToZip // Remove dirToZip as you can't rebase a directory to itself
  IO.zip(finder x rebase(dirToZip, "newlib"), base / "rebased.zip")
  IO.createDirectory(out)
  IO.unzip(base / "rebased.zip", out) mustEqual Set(out / "newlib" / "test1", out / "newlib" / "test2")
}
See the docs
http://www.scala-sbt.org/0.12.2/docs/Detailed-Topics/Mapping-Files.html
http://www.scala-sbt.org/0.12.3/docs/Detailed-Topics/Paths.html
for tips on creating the Traversable object to pass to IO.zip