Gatling Configuration - scala

Is it possible to programmatically override the gatling.core.directory.data path defined in gatling.conf?
I am attempting to read in a CSV that is not in the default directory.
I have attempted to do:
System.getProperties.setProperty("gatling.core.directory.data", FilePathHelper.getGatlingDataFilePath.getAbsolutePath)
But I still get a null pointer for my file:
val users = csv("user.csv")
Thanks

In the end, changing the data path is very easy, and running Gatling from code is just as simple (note that dataDirectory and resultsDirectory take plain path strings, not File objects):
val props = new GatlingPropertiesBuilder
props.simulationClass(<your runner>)
props.dataDirectory(<your data dir>)
props.resultsDirectory(<your report dir>)
Gatling.fromMap(props.build)
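For reference, a complete runnable sketch along those lines (the class and directory names are made up for illustration):
import io.gatling.app.Gatling
import io.gatling.core.config.GatlingPropertiesBuilder

object Runner {
  def main(args: Array[String]): Unit = {
    val props = new GatlingPropertiesBuilder
    props.simulationClass("com.example.MySimulation") // hypothetical simulation class
    props.dataDirectory("/path/to/data")              // folder that holds user.csv
    props.resultsDirectory("/path/to/results")
    Gatling.fromMap(props.build)
  }
}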

Xtend - saved files contain repeated data

I don't know why, but when I generate files with fsa.generateFile(fileName, finalString), the files are created fine; when I clean the project, however, the output is doubled.
Even if I delete the file, it keeps growing.
Is this a problem with my code or with Eclipse?
Thank you.
You store the file content as a member of the generator for some reason and never reset it:
val root = resource?.allContents?.head as ProblemSpecification;
s += readFile(path_sigAlloyDeclaration+"sigAlloyDeclaration.txt")
I assume s should either be local to the doGenerate method or be reset at the start:
s = ""
val root = resource?.allContents?.head as ProblemSpecification;
s += readFile(path_sigAlloyDeclaration+"sigAlloyDeclaration.txt")
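For intuition, here is the same bug pattern in a few lines of plain Scala (names are illustrative, not from the original generator): a mutable member accumulates across calls, so every run appends to the previous output.
class Generator {
  var s = "" // member state survives between calls, so output keeps growing

  def doGenerate(content: String): String = {
    s = "" // reset here (or make s a local val) and the doubling disappears
    s += content
    s
  }
}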

How to convert csv file in S3 bucket to RDD

I'm pretty new to this topic, so any help will be much appreciated.
I'm trying to read a CSV file stored in an S3 bucket and convert its data to an RDD so I can work with it directly, without creating a file locally.
So far I've been able to load the file using AmazonS3ClientBuilder, but all I get is the file content in an S3ObjectInputStream, and I'm not able to work with its content.
val bucketName = "bucket-name"
val credentials = new BasicAWSCredentials(
  "accessKey",
  "secretKey"
)
val s3client = AmazonS3ClientBuilder
  .standard()
  .withCredentials(new AWSStaticCredentialsProvider(credentials))
  .withRegion(Regions.US_EAST_2)
  .build()
val s3object = s3client.getObject(bucketName, "file-name.csv")
val inputStream = s3object.getObjectContent()
....
I have also tried using a BufferedSource, but once I have it, I don't know how to convert it to a DataFrame or RDD:
val myData = Source.fromInputStream(inputStream)
....
You can do it with the S3A file system provided in the Hadoop-AWS module:
Add this dependency: https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws
Either define <property><name>fs.s3.impl</name><value>org.apache.hadoop.fs.s3a.S3AFileSystem</value></property> in core-site.xml, or add .config("fs.s3.impl", classOf[S3AFileSystem].getName) to the SparkSession builder.
Access S3 using spark.read.csv("s3://bucket/key"); if you want the RDD that was asked for, use spark.read.csv("s3://bucket/key").rdd, as in the sketch below.
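For concreteness, a minimal sketch of the whole flow, assuming Spark 2.x with hadoop-aws on the classpath; it uses the s3a:// scheme directly, so the fs.s3.impl mapping above isn't needed, and the credentials and paths are placeholders:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("s3-csv-to-rdd")
  .master("local[*]")
  // the spark.hadoop. prefix forwards these keys to the Hadoop configuration
  .config("spark.hadoop.fs.s3a.access.key", "accessKey") // placeholder
  .config("spark.hadoop.fs.s3a.secret.key", "secretKey") // placeholder
  .getOrCreate()

// read the CSV straight from S3; .rdd yields the RDD the question asked for
val df = spark.read.option("header", "true").csv("s3a://bucket-name/file-name.csv")
val rdd = df.rdd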
In the end I was able to get the results I was looking for by following https://gist.github.com/snowindy/d438cb5256f9331f5eec
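That gist's approach boils down to draining the S3ObjectInputStream and parallelizing the lines yourself, roughly like this (a sketch continuing from the inputStream in the question; sc is an existing SparkContext):
import scala.io.Source

// read every line from the stream obtained above, then hand them to Spark
val lines = Source.fromInputStream(inputStream).getLines().toList
val linesRdd = sc.parallelize(lines)
Note this pulls the whole file into driver memory first, which is why the S3A route above scales better for large files.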

Passing In Config In Gatling Tests

Noob to Gatling/Scala here.
This might be a bit of a silly question, but I haven't been able to find an example of what I am trying to do.
I want to pass in things such as the base URL, username, and passwords for some of my calls. These would change from env to env, so I want to be able to change the values between envs while keeping the same tests in each.
I know we can feed in values, but that appears to be more for iterating over datasets and not so much for passing in config values like these.
Ideally I would like to keep this information in a JSON file rather than pass it on the command line, but maybe that's not doable?
Any guidance on this would be awesome.
I have a similar setup, and you can use pure Scala here. Create a singleton object, for example:
object Configuration { var INPUT_PROFILE_FILE_NAME = "" }
This object can also read a file; I have the code below inside it:
import java.io.FileInputStream
import java.util.Properties

val file = getClass.getResource("data/config.properties").getFile()
val prop = new Properties()
prop.load(new FileInputStream(file))
INPUT_PROFILE_FILE_NAME = prop.getProperty("inputProfileFileName")
Now you can import this object in your Gatling simulation file:
val profileName = Configuration.INPUT_PROFILE_FILE_NAME
https://docs.scala-lang.org/tutorials/tour/singleton-objects.html
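To cover the env-to-env part of the question: one common pattern (a sketch; the property and file names here are made up) is to pick a per-environment properties file via a JVM system property:
import java.util.Properties

object Configuration {
  // run with -Denv=staging (hypothetical property name); defaults to "dev"
  private val env = System.getProperty("env", "dev")
  private val prop = new Properties()
  prop.load(getClass.getResourceAsStream(s"/config/$env.properties"))

  val baseUrl: String = prop.getProperty("baseUrl")
  val username: String = prop.getProperty("username")
  val password: String = prop.getProperty("password")
}
The same idea works with a JSON file if you swap Properties for a JSON parser; the simulations themselves stay identical across environments.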

Sending email with attachment using scala and Liftweb

This is the first time I am integrating an email service with Liftweb.
I want to send email with attachments (like documents, images, PDFs).
My code looks like below:
case class CSVFile(bytes: Array[Byte],
  filename: String = "file.csv",
  mime: String = "text/csv; charset=utf8; header=present")
val attach = CSVFile(fileupload.mkString.getBytes("utf8"))
val body = <p>Please research the enclosed.</p>
val msg = XHTMLPlusImages(body,
  PlusImageHolder(attach.filename, attach.mime, attach.bytes))
Mailer.sendMail(
  From("vyz@gmail.com"),
  Subject(subject(0)),
  To(to(0)),
  msg)
This code is taken from the Lift Cookbook, but it is not working as I need: the mail is sent, but the attachment contains only the file name (file.csv) with no data in it (I uploaded the file gsy.docx).
Best regards,
GSY
You don't specify what type fileupload is, but assuming it is of type net.liftweb.http.FileParamHolder, the issue is that you can't just call mkString and expect it to have any data: there is no data in the object, just a fileStream method for retrieving it (either from disk or memory).
The easiest way to accomplish what you want would be to use a ByteArrayOutputStream and copy the data to it. I haven't tested it, but the code below should solve your issue. For brevity, it uses Apache Commons IO to copy the streams, but you could just as easily do it natively.
import java.io.ByteArrayOutputStream
import org.apache.commons.io.IOUtils

val data = {
  val os = new ByteArrayOutputStream()
  IOUtils.copy(fileupload.fileStream, os) // drain the upload's stream into memory
  os.toByteArray
}
val attach = CSVFile(data)
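For reference, the "do it natively" option mentioned above could look like this (a sketch using only the standard library; note fileStream is called once, since each call may open a fresh stream):
import java.io.ByteArrayOutputStream

val data = {
  val in = fileupload.fileStream
  val os = new ByteArrayOutputStream()
  val buf = new Array[Byte](8192)
  var n = in.read(buf)
  while (n != -1) { // copy until end of stream
    os.write(buf, 0, n)
    n = in.read(buf)
  }
  os.toByteArray
}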
BTW, you say you are uploading a Word (DOCX) file and expecting it to automatically become CSV when the extension is changed? You will just get a DOCX file with a csv extension unless you actually do some conversion.

Mirth: How to get source file directory from file reader channel

I have a file reader channel picking up an XML document. By default, a file reader channel populates 'originalFilename' in the channel map, which only gives me the name of the file, not the full path. Is there any way to get the full path without having to hard code something?
You can get any of the Source reader properties like this:
var sourceFolder = Packages.com.mirth.connect.server.controllers.ChannelController
  .getInstance()
  .getDeployedChannelById(channelId)
  .getSourceConnector()
  .getProperties()
  .getProperty('host');
I posted this on the Mirth forums with a list of the other properties you can access:
http://www.mirthcorp.com/community/forums/showthread.php?t=2210
You could put the directory in a channel deploy script:
globalChannelMap.put("pickupDirectory", "/Mirth/inbox");
then use that map in both your source connector:
${pickupDirectory}
and in another channel script:
function getFileLastModified(fileName) {
  var directory = globalChannelMap.get("pickupDirectory").toString();
  var fullPath = directory + "/" + fileName;
  var file = new Packages.java.io.File(fullPath); // 'new' is required to construct the Java object
  var formatter = new Packages.java.text.SimpleDateFormat("yyyyMMddHHmmss"); // HH = 24-hour clock
  formatter.setTimeZone(Packages.java.util.TimeZone.getTimeZone("UTC"));
  return formatter.format(file.lastModified());
}
Unfortunately, there is no variable or method for retrieving the file's full path. Of course, you probably already know the path, since you would have had to provide it in the Directory field. I experimented with using the preprocessor to store the path in a channel variable, but the Directory field is unable to reference variables. Thus, you're stuck having to hard code the full path everywhere you need it.