Is there a way to generate random values inside the session for the same parameter?
Example JSON file:
{
"age": "${age}"
},
{
"age": "${age}"
}
val ageFeeder = Iterator.continually(Map("age" -> (ThreadLocalRandom.current().nextInt(100) + 1).toString))
val scn = scenario("test")
.exec(feed(ageFeeder))
.exec(session => {
// code to read the file using ElFileBody which replaces ${age} with randomly generated age
})
I want a new random value to be generated for each occurrence of ${age} in the file.
The feed instruction should not be wrapped inside an exec, I think.
Try:
.feed(ageFeeder)
.exec(session => { ... })
Assuming you won't know until execution how many ${age} values you need to replace, you might be better off not using a feeder.
Instead, you could have a session variable that is a list of all possible values, then use the Gatling EL to randomly index into that list.
So at the top of your file:
val ages: Seq[Int] = (1 to 100).toSeq
Then, at the start of your scenario, persist this list into the session:
exec(session => session.set("ages", ages))
and then in your file, rather than ${age}, you can use
"${ages.random()}"
So I have a large dataset that is a sample of a stackoverflow userbase. One line from this dataset is as follows:
<row Id="42" Reputation="11849" CreationDate="2008-08-01T13:00:11.640" DisplayName="Coincoin" LastAccessDate="2014-01-18T20:32:32.443" WebsiteUrl="" Location="Montreal, Canada" AboutMe="A guy with the attention span of a dead goldfish who has been having a blast in the industry for more than 10 years.
Mostly specialized in game and graphics programming, from custom software 3D renderers to accelerated hardware pipeline programming." Views="648" UpVotes="337" DownVotes="40" Age="35" AccountId="33" />
I would like to extract the number from Reputation (in this case "11849") and the number from Age (in this example "35"), and I would like to have them as floats.
The file is located in HDFS, so it comes in as an RDD.
val linesWithAge = lines.filter(line => line.contains("Age=")) // keep only the lines that have an Age attribute
val repSplit = linesWithAge.flatMap(line => line.split("\"")) // split the data wherever there is a "
So when I split on quotation marks, the reputation is at index 3 and the age at index 23, but how do I assign these to a map or a variable so I can use them as floats?
I also need to do this for every line of the RDD.
EDIT:
val linesWithAge = lines.filter(line => line.contains("Age=")) //transformations from the original input data
val repSplit = linesWithAge.flatMap(line => line.split("\""))
val withIndex = repSplit.zipWithIndex
val indexKey = withIndex.map{case (k,v) => (v,k)}
val b = indexKey.lookup(3)
println(b)
So I've added an index to the array and successfully managed to assign one value to a variable, but I can only do this for one item in the RDD. Does anyone know how I could do it for all items?
What we want to do is to transform each element in the original dataset (represented as an RDD) into a tuple containing (Reputation, Age) as numeric values.
One possible approach is to transform each element of the RDD using String operations in order to extract the values of the elements "Age" and "Reputation", like this:
// define a function to extract the value of an element, given the name
def findElement(src: Array[String], name:String):Option[String] = {
for {
entry <- src.find(_.startsWith(name))
value <- entry.split("\"").lift(1)
} yield value
}
We then use that function to extract the interesting values from every record:
val reputationByAge = lines.flatMap{line =>
val elements = line.split(" ")
for {
age <- findElement(elements, "Age")
rep <- findElement(elements, "Reputation")
} yield (rep.toInt, age.toInt)
}
Note how we don't need to filter on "Age" before doing this. If we process a record that does not have "Age" or "Reputation", findElement will return None. Hence the result of the for-comprehension will be None and the record will be dropped by the flatMap operation.
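Since the question asks for floats rather than ints, a minimal variant of the same extraction (only the final conversions change) would be:
val reputationByAge = lines.flatMap { line =>
  val elements = line.split(" ")
  for {
    age <- findElement(elements, "Age")
    rep <- findElement(elements, "Reputation")
  } yield (rep.toFloat, age.toFloat) // floats instead of ints, as requested
}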
A better way to approach this problem is by realizing that we are dealing with structured XML data. Scala provides built-in support for XML, so we can do this:
import scala.xml.XML
// helper function to map a String to an Option, where empty strings become None
def emptyStrToNone(str:String):Option[String] = if (str.isEmpty) None else Some(str)
val xmlReputationByAge = lines.flatMap{line =>
val record = XML.loadString(line)
for {
rep <- emptyStrToNone((record \ "#Reputation").text)
age <- emptyStrToNone((record \ "#Age").text)
} yield (rep.toInt, age.toInt)
}
This method relies on the structure of the XML record to extract the right attributes. As before, we use the combination of Option values and flatMap to remove records that do not contain all the information we require.
First, you need a function which extracts the value for a given key of your line (getValueForKeyAs[T]), then do:
val rdd = linesWithAge.map(line => (getValueForKeyAs[Float](line,"Age"), getValueForKeyAs[Float](line,"Reputation")))
This should give you an RDD of type RDD[(Float, Float)].
getValueForKeyAs could be implemented like this:
def getValueForKeyAs[A](line:String, key:String) : A = {
val res = line.split(key+"=")
if(res.size==1) throw new RuntimeException(s"no value for key $key")
val value = res(1).split("\"")(1)
return value.asInstanceOf[A]
}
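Note that asInstanceOf[A] will not actually convert a String into a Float at runtime. For the concrete case in the question, a non-generic sketch (not the original answer's code) that performs a real conversion could look like this:
// returns the attribute value as a Float, throwing if the key is missing
def getValueForKeyAsFloat(line: String, key: String): Float = {
  val res = line.split(key + "=\"")
  if (res.size == 1) throw new RuntimeException(s"no value for key $key")
  res(1).split("\"")(0).toFloat
}

val rdd = linesWithAge.map(line =>
  (getValueForKeyAsFloat(line, "Age"), getValueForKeyAsFloat(line, "Reputation")))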
How can I create a simple feeder in Gatling without using a csv file?
I have tried scripts from the Gatling documentation.
I have seen one example in the documentation
val random = new util.Random
val feeder = Iterator.continually(Map("email" -> random.nextString(20) + "#foo.com"))
I don't understand the above code.
I tried a script with a feeder that uses a csv file and it executed successfully. Instead of feeding data from a csv file, how do I write a feeder that takes values defined in code?
As the docs state, a Feeder is just an alias for Iterator[Map[String, T]]. You just need to make sure your feeder provides an infinite stream of values, as highlighted by Rüdiger Klaehn.
Since you said you were already able to run an example using the built-in csv feeder, let's convert it to our own feeder so it becomes clearer what the custom feeder code above does.
Let's look at the example that comes from the advanced tutorial:
object Search {
val feeder = csv("search.csv").random // 1, 2
val search = exec(http("Home")
.get("/"))
.pause(1)
.feed(feeder) // 3
.exec(http("Search")
.get("/computers?f=${searchCriterion}") // 4
.check(css("a:contains('${searchComputerName}')", "href").saveAs("computerURL"))) // 5
.pause(1)
.exec(http("Select")
.get("${computerURL}")) // 6
.pause(1)
}
This is the part that creates the feeder:
val feeder = csv("search.csv").random // 1, 2
And this is the search.csv file:
searchCriterion,searchComputerName
Macbook,MacBook Pro
eee,ASUS Eee PC 1005PE
Let's replace that with our new custom feeder:
/* This is our list of choices; we won't read from csv anymore */
val availableComputers = List(
Map("searchCriterion" -> "MacBook", "searchComputerName" -> "Macbook Pro"),
Map("searchCriterion" -> "eee", "searchComputerName" -> "ASUS Eee PC 1005PE")
)
/* Every time we call this method we get a random member of availableComputers */
def pickARandomComputerInfo() = {
availableComputers(Random.nextInt(availableComputers.size))
}
/* Continually means that every time you ask the feeder for a new input entry,
it will call pickARandomComputerInfo to generate an input for you.
So iterating over the feeder never ends; you will always get
something */
val feeder = Iterator.continually(pickARandomComputerInfo())
This is harder to see in your provided example, but you could split it to better understand it:
def getRandomEmailInfo() = Map("email" -> (random.nextString(20) + "#foo.com"))
val feeder = Iterator.continually(getRandomEmailInfo())
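For completeness, a minimal sketch of how such a custom feeder plugs into a full simulation (the request name, endpoint, and base URL are assumptions, scala.util.Random must be imported, and alphanumeric characters are used instead of nextString so the generated e-mail is printable):
import scala.util.Random

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class CustomFeederSimulation extends Simulation {

  val random = new Random

  // a fresh random e-mail every time the feeder is asked for an entry
  val feeder = Iterator.continually(Map("email" -> (random.alphanumeric.take(20).mkString + "@foo.com")))

  val scn = scenario("custom feeder test")
    .feed(feeder)
    .exec(http("register")               // hypothetical request name
      .post("/register")                 // hypothetical endpoint
      .formParam("email", "${email}"))

  setUp(scn.inject(atOnceUsers(10)))
    .protocols(http.baseUrl("http://localhost:8080")) // hypothetical base URL (Gatling 3 syntax)
}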
Suppose this is my data:
‘Maps‘ and ‘Reduces‘ are two phases of solving a query in HDFS.
‘Map’ is responsible to read data from input location.
it will generate a key value pair.
that is, an intermediate output in local machine.
’Reducer’ is responsible to process the intermediate.
output received from the mapper and generate the final output.
and I want to add a number to every line, like the output below:
1,‘Maps‘ and ‘Reduces‘ are two phases of solving a query in HDFS.
2,‘Map’ is responsible to read data from input location.
3,it will generate a key value pair.
4,that is, an intermediate output in local machine.
5,’Reducer’ is responsible to process the intermediate.
6,output received from the mapper and generate the final output.
and save them to a file.
I've tried:
object DS_E5 {
def main(args: Array[String]): Unit = {
var i=0
val conf = new SparkConf().setAppName("prep").setMaster("local")
val sc = new SparkContext(conf)
val sample1 = sc.textFile("data.txt")
for(sample<-sample1){
i=i+1
val ss=sample.map(l=>(i,sample))
println(ss)
}
}
}
but its output is like below:
Vector((1,‘Maps‘ and ‘Reduces‘ are two phases of solving a query in HDFS.))
...
How can I edit my code to generate my desired output?
zipWithIndex is what you need here. It maps from RDD[T] to RDD[(T, Long)] by adding an index on the second position of the pair.
sample1
.zipWithIndex()
.map { case (line, i) => i.toString + ", " + line }
or using string interpolation (see a comment by @DanielC.Sobral)
sample1
.zipWithIndex()
.map { case (line, i) => s"$i, $line" }
By calling val sample1 = sc.textFile("data.txt") you are creating a new RDD.
If you just need the output, you can try the following code:
sample1.zipWithIndex().foreach(f => println(f._2 + ", " + f._1))
Basically, by using this code, you will do this:
Using .zipWithIndex() will return a new RDD[(T, Long)], where (T, Long) is a tuple, T is the previous RDD's element type (java.lang.String, I believe) and Long is the index of the element in the RDD.
You performed a transformation; now you need to run an action. foreach suits this case very well. What it basically does is apply your statement to every element of the current RDD, so here we just call a quickly formatted println.
My query is: read input from a file and convert the data lines of the file to List[Map[Int,String]] using Scala. Here I give a dataset as the input. My code is:
def id3(attrs: Attributes,
examples: List[Example],
label: Symbol
) : Node = {
level = level+1
// if all the examples have the same label, return a new node with that label
if(examples.forall( x => x(label) == examples(0)(label))){
new Leaf(examples(0)(label))
} else {
for(a <- attrs.keySet-label){ //except label, take all attrs
("Information gain for %s is %f".format(a,
informationGain(a,attrs,examples,label)))
}
// find the best splitting attribute - this is an argmax on a function over the list
var bestAttr:Symbol = argmax(attrs.keySet-label, (x:Symbol) =>
informationGain(x,attrs,examples,label))
// now we produce a new branch, which splits on that node, and recurse down the nodes.
var branch = new Branch(bestAttr)
for(v <- attrs(bestAttr)){
val subset = examples.filter(x=> x(bestAttr)==v)
if(subset.size == 0){
// println(levstr+"Tiny subset!")
// zero subset, we replace with a leaf labelled with the most common label in
// the examples
val m = examples.map(_(label))
val mostCommonLabel = m.toSet.map((x:Symbol) => (x,m.count(_==x))).maxBy(_._2)._1
branch.add(v,new Leaf(mostCommonLabel))
}
else {
// println(levstr+"Branch on %s=%s!".format(bestAttr,v))
branch.add(v,id3(attrs,subset,label))
}
}
level = level-1
branch
}
}
}
object samplet {
def main(args: Array[String]){
var attrs: sample.Attributes = Map()
attrs += ('0 -> Set('abc,'nbv,'zxc))
attrs += ('1 -> Set('def,'ftr,'tyh))
attrs += ('2 -> Set('ghi,'azxc))
attrs += ('3 -> Set('jkl,'fds))
attrs += ('4 -> Set('mno,'nbh))
val examples: List[sample.Example] = List(
Map(
'0 -> 'abc,
'1 -> 'def,
'2 -> 'ghi,
'3 -> 'jkl,
'4 -> 'mno
),
........................
)
// obviously we can't use the label as an attribute, that would be silly!
val label = 'play
println(sample.try(attrs,examples,label).getStr(0))
}
}
But how do I change this code to accept input from a .csv file?
I suggest you use Java's io / nio standard library to read your CSV file. I think there is no relevant drawback in doing so.
But the first question we need to answer is where to read the file in the code. The parsed input seems to replace the value of examples. This fact also hints at what type the parsed CSV input must have, namely List[Map[Symbol, Symbol]]. So let us declare a new class:
class InputFromCsvLoader(charset: Charset = Charset.defaultCharset()) {
def getInput(file: Path): List[Map[Symbol, Symbol]] = ???
}
Note that the Charset is only needed if we must distinguish between differently encoded CSV-files.
Okay, so how do we implement the method? It should do the following:
Create an appropriate input reader
Read all lines
Split each line at the comma-separator
Transform each substring into the symbol it represents
Build a map from the list of symbols, using the attributes as keys
Create and return the list of maps
Or expressed in code:
class InputFromCsvLoader(charset: Charset = Charset.defaultCharset()) {
val Attributes = List('outlook, 'temperature, 'humidity, 'wind, 'play)
val Separator = ","
/** Get the desired input from the CSV file. Does not perform any checks, i.e., there are no guarantees on what happens if the input is malformed. */
def getInput(file: Path): List[Map[Symbol, Symbol]] = {
val reader = Files.newBufferedReader(file, charset)
/* Read the whole file and discard the first line */
inputWithHeader(reader).tail
}
/** Reads all lines in the CSV file using [[java.io.BufferedReader]] There are many ways to do this and this is probably not the prettiest. */
private def inputWithHeader(reader: BufferedReader): List[Map[Symbol, Symbol]] = {
(JavaConversions.asScalaIterator(reader.lines().iterator()) foldLeft Nil.asInstanceOf[List[Map[Symbol, Symbol]]]){
(accumulator, nextLine) =>
parseLine(nextLine) :: accumulator
}.reverse
}
/** Parse an entry. Does not verify the input: if there are fewer attributes than columns or vice versa, zip creates a list of the size of the shorter list */
private def parseLine(line: String): Map[Symbol, Symbol] = (Attributes zip (line split Separator map parseSymbol)).toMap
/** Create a symbol from a String... we could also check whether the string represents a valid symbol */
private def parseSymbol(symbolAsString: String): Symbol = Symbol(symbolAsString)
}
Caveat: Expecting only valid input, we are certain that the individual symbol representations do not contain the comma-separation character. If this cannot be assumed, then the code as is would fail to split certain valid input strings.
To use this new code, we could change the main-method as follows:
def main(args: Array[String]){
val csvInputFile: Option[Path] = args.headOption map (p => Paths get p)
val examples = (csvInputFile map new InputFromCsvLoader().getInput).getOrElse(exampleInput)
// ... your code
Here, examples falls back to exampleInput (the current, hardcoded value of examples) if no input argument is specified.
Important: In the code, all error handling has been omitted for convenience. Errors can occur when reading from files, and user input must always be considered potentially invalid, so sadly, error handling at the boundaries of your program is usually not optional.
Side-notes:
Try not to use null in your code. Returning Option[T] is a better option than returning null, because it makes "nullness" explicit and provides static safety thanks to the type-system.
The return-keyword is not required in Scala, as the last value of a method is always returned. You can still use the keyword if you find the code more readable or if you want to break in the middle of your method (which is usually a bad idea).
Prefer val over var, because immutable values are much easier to understand than mutable values.
The code will fail with the provided CSV string, because it contains the symbols TRUE and FALSE which are not legal according to your program's logic (they should be true and false instead).
Add all relevant information to your error messages. Your error message only tells me that a value for the attribute 'wind is bad, but it does not tell me what the actual value is.
Read the csv file (this needs import scala.io.Source):
val datalines = Source.fromFile(filepath).getLines()
Now datalines contains all the lines from the csv file.
Next, convert each line into a Map[Int,String]
val datamap = datalines.map{ line =>
line.split(",").zipWithIndex.map{ case (word, idx) => idx -> word}.toMap
}
Here we split each line on ",", then construct a map whose keys are the column numbers and whose values are the words after the split.
Next, if we want a List[Map[Int,String]]:
val datamap = datalines.map{ line =>
line.split(",").zipWithIndex.map{ case (word, idx) => idx -> word}.toMap
}.toList
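Putting it together, a minimal self-contained sketch (the file name and the final println are only for illustration):
import scala.io.Source

object CsvToMaps {
  def main(args: Array[String]): Unit = {
    val filepath = "data.csv" // hypothetical input file
    val source = Source.fromFile(filepath)
    try {
      val datamap: List[Map[Int, String]] = source.getLines().map { line =>
        line.split(",").zipWithIndex.map { case (word, idx) => idx -> word }.toMap
      }.toList
      datamap.foreach(println)
    } finally {
      source.close() // release the file handle
    }
  }
}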
I use the following command to fill an RDD with a bunch of arrays containing 2 strings ["filename", "content"]:
val someRDD = sc.wholeTextFiles("hdfs://localhost:8020/user/cloudera/*")
Now I want to iterate over each of those occurrences to do something with every filename and content.
I can't seem to find any documentation on how to do this however.
So what I want is this:
foreach occurrence-in-the-rdd{
//do stuff with the array found at location n of the RDD
}
You call various methods on the RDD that accept functions as parameters.
// set up an example -- an RDD of arrays
val sparkConf = new SparkConf().setMaster("local").setAppName("Example")
val sc = new SparkContext(sparkConf)
val testData = Array(Array(1,2,3), Array(4,5,6,7,8))
val testRDD = sc.parallelize(testData, 2)
// Print the RDD of arrays.
testRDD.collect().foreach(a => println(a.size))
// Use map() to create an RDD with the array sizes.
val countRDD = testRDD.map(a => a.size)
// Print the elements of this new RDD.
countRDD.collect().foreach(a => println(a))
// Use filter() to create an RDD with just the longer arrays.
val bigRDD = testRDD.filter(a => a.size > 3)
// Print each remaining array.
bigRDD.collect().foreach(a => {
a.foreach(e => print(e + " "))
println()
})
Notice that the functions you write accept a single RDD element as input, and return data of some uniform type, so you create an RDD of the latter type. For example, countRDD is an RDD[Int], while bigRDD is still an RDD[Array[Int]].
It will probably be tempting at some point to write a foreach that modifies some other data, but you should resist for reasons described in this question and answer.
Edit: Don't try to print large RDDs
Several readers have asked about using collect() and println() to see their results, as in the example above. Of course, this only works if you're running in an interactive mode like the Spark REPL (read-eval-print loop). It's best to call collect() on the RDD to get a sequential array for orderly printing. But collect() may bring back too much data, and in any case too much may be printed. Here are some alternative ways to get insight into your RDDs if they're large:
RDD.take(): This gives you fine control over the number of elements you get, but not where they come from -- they are defined as the "first" ones, a concept dealt with by various other questions and answers here.
// take() returns an Array so no need to collect()
myHugeRDD.take(20).foreach(a => println(a))
RDD.sample(): This lets you (roughly) control the fraction of results you get, whether sampling uses replacement, and even optionally the random number seed.
// sample() does return an RDD so you may still want to collect()
myHugeRDD.sample(true, 0.01).collect().foreach(a => println(a))
RDD.takeSample(): This is a hybrid: using random sampling that you can control, but both letting you specify the exact number of results and returning an Array.
// takeSample() returns an Array so no need to collect()
myHugeRDD.takeSample(true, 20).foreach(a => println(a))
RDD.count(): Sometimes the best insight comes from how many elements you ended up with -- I often do this first.
println(myHugeRDD.count())
The fundamental operations in Spark are map and filter.
val txtRDD = someRDD filter { case(id, content) => id.endsWith(".txt") }
txtRDD will now contain only the files that have the extension ".txt".
And if you then want to word-count those files, you can say:
//split the documents into words in one long list
val words = txtRDD flatMap { case (id,text) => text.split("\\s+") }
// give each word a count of 1
val wordT = words map (x => (x,1))
//sum up the counts for each word
val wordCount = wordT reduceByKey((a, b) => a + b)
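To inspect the result without pulling the entire RDD back to the driver, you might, for example, look at a handful of entries:
// take a small sample of (word, count) pairs and print them
wordCount.take(10).foreach { case (word, count) => println(s"$word: $count") }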
You want to use mapPartitions when you have some expensive initialization you need to perform -- for example, if you want to do Named Entity Recognition with a library like the Stanford coreNLP tools.
Master map, filter, flatMap, and reduce, and you are well on your way to mastering Spark.
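As mentioned above, mapPartitions pays off when there is expensive per-partition initialization; here is a minimal sketch (the "expensive resource" is just a stand-in for something like an NLP pipeline):
// the resource is created once per partition instead of once per element
val processed = txtRDD.mapPartitions { iter =>
  val expensiveResource = new StringBuilder("initialized once per partition") // stand-in for a real, costly setup
  iter.map { case (id, text) =>
    (id, text.split("\\s+").length) // do the real per-element work here, e.g. a token count
  }
}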
I would try making use of a partition mapping function. The code below shows how an entire RDD dataset can be processed in a loop so that each input goes through the very same function. I am afraid I have no knowledge of Scala, so everything I have to offer is Java code. However, it should not be very difficult to translate it into Scala.
JavaRDD<String> res = file.mapPartitions(new FlatMapFunction<Iterator<String>, String>() {
    @Override
    public Iterable<String> call(Iterator<String> t) throws Exception {
        ArrayList<String> tmpRes = new ArrayList<>();
        while (t.hasNext()) {
            String line = t.next();                       // consume the element, otherwise the loop never ends
            tmpRes.add("filename/content for: " + line);  // replace with the real per-element processing
        }
        return tmpRes;
    }
}).cache();
What wholeTextFiles returns is a pair RDD:
def wholeTextFiles(path: String, minPartitions: Int): RDD[(String, String)]
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.
Here is an example of reading the files at a local path and then printing every filename and content:
val conf = new SparkConf().setAppName("scala-test").setMaster("local")
val sc = new SparkContext(conf)
sc.wholeTextFiles("file:///Users/leon/Documents/test/")
.collect
.foreach(t => println(t._1 + ":" + t._2));
the result:
file:/Users/leon/Documents/test/1.txt:{"name":"tom","age":12}
file:/Users/leon/Documents/test/2.txt:{"name":"john","age":22}
file:/Users/leon/Documents/test/3.txt:{"name":"leon","age":18}
Or, converting the pair RDD to an RDD of just the contents first:
sc.wholeTextFiles("file:///Users/leon/Documents/test/")
.map(t => t._2)
.collect
.foreach { x => println(x)}
the result:
{"name":"tom","age":12}
{"name":"john","age":22}
{"name":"leon","age":18}
And I think wholeTextFiles is better suited to small files.
for (element <- YourRDD) {
  // do what you want with element in each iteration; note that a counter variable incremented
  // here will not give reliable indices on a distributed RDD -- use zipWithIndex for that
  println(element._1) // this will print all filenames
}