I don't know why, but when I generate files with fsa.generateFile(fileName, finalString), the files are created fine; when I clean the project, however, the output doubles.
Even if I delete the file, it continues growing.
Is this a code or Eclipse problem?
Thank you.
You store the file content, for some reason, as a member of the generator and never reset it:
val root = resource?.allContents?.head as ProblemSpecification;
s += readFile(path_sigAlloyDeclaration+"sigAlloyDeclaration.txt")
I assume s should either be local to the doGenerate method or be reset at the start of it:
s = "" // reset the accumulated content on every generator run
val root = resource?.allContents?.head as ProblemSpecification;
s += readFile(path_sigAlloyDeclaration+"sigAlloyDeclaration.txt")
I am currently writing tests for a function that takes file paths and loads a dataset from them. I am not able to change the function. To test it, I currently create files for each run of the test function. I am worried that simply making files and then deleting them is bad practice. Is there a better way to create temporary test files in Scala?
import java.io.{File, PrintWriter}
val testFile = new File("src/main/resources/temp.txt")
val pw = new PrintWriter(testFile)
val testLines = List("this is a text line", "this is the next text line")
testLines.foreach(pw.println) // println keeps the lines separated; write would concatenate them
pw.close()
// test logic here
testFile.delete()
I would generally prefer java.nio over java.io. You can create a temporary file like so:
import java.nio.file.Files
Files.createTempFile("test", ".txt") // prefix and suffix for the generated file name
You can delete it using Files.delete. To ensure that the file is deleted even in the case of an error, you should put the delete call into a finally block.
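A minimal sketch of that pattern (loadDataset is a hypothetical stand-in for the function under test; CollectionConverters assumes Scala 2.13+):
import java.nio.file.{Files, Path}
import scala.jdk.CollectionConverters._

val tempFile: Path = Files.createTempFile("dataset-", ".txt")
try {
  // Write the fixture lines, then exercise the function under test.
  Files.write(tempFile, List("this is a text line", "this is the next text line").asJava)
  // loadDataset(tempFile.toString) // hypothetical function under test
} finally {
  Files.delete(tempFile) // runs even if the test body throws
}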
I'm pretty new to Matlab, and I'm looking for a way to open a file called data.txt from several subfolders (2414A, 2443A, 6732A, 4577A, and so on) without the files overwriting each other. All of them are in one giant folder, just within different subfolders.
My question is: instead of changing the folder name every time I open data.txt and setting a variable for each of the txt files, is there a quicker way to do this? My end goal is to concatenate all of the data.txt matrices for computation.
I currently just have:
cd C:\User\Aisk_000\Desktop\A\NC\Subjects\2414A\
NC1 = dlmread('data.txt');
cd ../2443A\
NC2 = dlmread('data.txt');
cd ../6732A\
...etc. It gets the job done, but it is tedious.
As simple as this:
files = dir('C:\User\Aisk_000\Desktop\A\NC\Subjects\*\data.txt');
files_num = numel(files);
files_data = cell(files_num,1);
for i = 1:files_num
file = files(i);
file_path = fullfile(file.folder,file.name);
files_data{i} = dlmread(file_path);
end
If you want to build up a simple indexing system, use this code instead:
files = dir('C:\User\Aisk_000\Desktop\A\NC\Subjects\*\data.txt');
files_num = numel(files);
files_data = cell(files_num,2);
for i = 1:files_num
file = files(i);
file_folder_idx = strsplit(file.folder,'\');
file_folder_idx = file_folder_idx{end};
file_path = fullfile(file.folder,file.name);
files_data{i,1} = file_folder_idx;
files_data{i,2} = dlmread(file_path);
end
This way, if you have to save your files back to disk after they have been modified, you will be able to rebuild the structure of your C:\User\Aisk_000\Desktop\A\NC\Subjects\ folder and know in which path to save the file data currently being processed.
Is it possible to programmatically set the gatling.core.directory.data path from the gatling.conf?
I am attempting to read in a CSV that is not in the default directory.
I have attempted:
System.getProperties.setProperty("gatling.core.directory.data",FilePathHelper.getGatlingDataFilePath.getAbsolutePath)
But I still get a null pointer for my file:
val users = csv("user.csv")
Thanks
In the end, changing the data path is very easy, and running Gatling from code is just as simple:
import java.io.File
import io.gatling.app.Gatling
import io.gatling.core.config.GatlingPropertiesBuilder

val props = new GatlingPropertiesBuilder
props.simulationClass(<your runner>)
props.dataDirectory(new File(<your data dir>).getAbsolutePath) // dataDirectory expects a String path
props.resultsDirectory(new File(<your report dir>).getAbsolutePath)
Gatling.fromMap(props.build)
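For reference, a hedged sketch of the simulation side under this setup (Gatling 2.x API assumed; the class name, scenario name, and URL are placeholders): with the data directory pointing at the folder that contains user.csv, the feeder call can stay as it was.
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class UserSimulation extends Simulation {
  // "user.csv" is resolved against the data directory configured above.
  val users = csv("user.csv").circular
  val scn = scenario("users").feed(users).exec(http("home").get("http://example.com"))
  setUp(scn.inject(atOnceUsers(1)))
}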
I am receiving the streaming data myDStream (DStream[String]) that I want to save in S3 (basically, for this question, it doesn't matter where exactly I want to save the outputs, but I am mentioning it just in case).
The following code works well, but it saves folders with names like jsonFile-19-45-46.json, and then inside the folders it saves the files _SUCCESS and part-00000.
Is it possible to save each RDD[String] (these are JSON strings) into a JSON file, not a folder? I thought that repartition(1) would do the trick, but it didn't.
myDStream.foreachRDD { rdd =>
// datetimeString = ....
rdd.repartition(1).saveAsTextFile("s3n://mybucket/keys/jsonFile-"+datetimeString+".json")
}
AFAIK there is no option to save the output as a single file. Because Spark is a distributed processing framework, it is not good practice to write to a single file; instead, each partition writes its own file to the specified path.
We can only pass the output directory where we want the data saved. The output writer creates one or more files (depending on the number of partitions) inside that path, with a part- file name prefix.
As an alternative to rdd.collect.mkString("\n"), you can use the Hadoop FileSystem library to clean up the output by moving the part-00000 file into its place. The code below works perfectly on the local filesystem and HDFS, but I'm unable to test it with S3:
import org.apache.hadoop.fs.Path

val outputPath = "path/to/some/file.json"
rdd.saveAsTextFile(outputPath + "-tmp") // write into a temporary directory first
val fs = org.apache.hadoop.fs.FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.rename(new Path(outputPath + "-tmp/part-00000"), new Path(outputPath)) // move the single part file into place
fs.delete(new Path(outputPath + "-tmp"), true) // remove the now-empty temporary directory
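For S3, the same approach should work if the FileSystem is bound to the bucket's URI rather than the default one. An untested sketch, mirroring the caveat above (the bucket and key names come from the question; file.json is a placeholder):
import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// Obtain a FileSystem for the S3 bucket rather than the default (local/HDFS) one.
val s3 = FileSystem.get(new URI("s3n://mybucket"), spark.sparkContext.hadoopConfiguration)
s3.rename(new Path("s3n://mybucket/keys/file.json-tmp/part-00000"), new Path("s3n://mybucket/keys/file.json"))
s3.delete(new Path("s3n://mybucket/keys/file.json-tmp"), true)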
For Java I implemented the equivalent. Hope it helps:
import java.io.File;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

FileSystem fs = FileSystem.get(spark.sparkContext().hadoopConfiguration());
File dir = new File(System.getProperty("user.dir") + "/my.csv/");
File[] files = dir.listFiles((d, name) -> name.endsWith(".csv"));
fs.rename(new Path(files[0].toURI()), new Path(System.getProperty("user.dir") + "/csvDirectory/newData.csv"));
fs.delete(new Path(System.getProperty("user.dir") + "/my.csv/"), true);
I am using SharpSvn in my C# project. I have a folder with some text files in it and other folders with subfolders in it. I add the folder under version control and commit it. So far, so good. This is my code:
using (SvnClient client = new SvnClient())
{
SvnCommitArgs ca = new SvnCommitArgs();
SvnStatusArgs sa = new SvnStatusArgs();
sa.Depth = SvnDepth.Empty;
Collection<SvnStatusEventArgs> statuses;
client.GetStatus(pathsConfigurator.FranchisePath, sa, out statuses);
if (statuses.Count == 1)
{
if (!statuses[0].Versioned)
{
client.Add(pathsConfigurator.FranchisePath);
ca.LogMessage = "Added";
client.Commit(pathsConfigurator.FranchisePath, ca);
}
else if (statuses[0].Modified)
{
ca.LogMessage = "Modified";
client.Commit(pathsConfigurator.FranchisePath, ca);
}
}
}
I make some changes to one of the text files and then run the code again. The modification is not committed. This condition is false:
if (statuses.Count == 1)
and all the logic in the if block does not execute.
I did not write this logic and cannot quite understand these lines of code:
client.GetStatus(pathsConfigurator.FranchisePath, sa, out statuses);
if (statuses.Count == 1) {}
I went to the official site of the API but couldn't find documentation about this.
Can someone who is more familiar with this API tell me what these two lines do?
What changes need to be made to this code so that if any of the contents of the pathsConfigurator.FranchisePath folder are changed, the whole folder with the changes gets committed? Any suggestions with a working example will be greatly appreciated.
Committing one directory with everything inside is pretty easy: just call commit on that directory.
The default depth of commit is SvnDepth.Infinity, so that would work directly. You can set additional options by providing a SvnCommitArgs object to SvnClient.Commit().