Unable to pass dynamic and unique date values in JMeter
I have a request payload (JSON format) containing an array of 1000 objects. Each object has 6 key-value pairs: 5 of them I read from a CSV file via parameterization, and the 6th key has to be a unique future date for each object in the array.
I tried this with the time-shift function, which works for one iteration, but I want to execute it for n iterations.
I looked for Groovy code for this, but I have no knowledge of Groovy and have only just started learning it.
How can I achieve this in JMeter?
Also, when the time-shift function is read from HTTP Request Defaults > Parameters or from Test Plan > User Defined Variables, it does not produce a different date for each object; it duplicates the date of the first variable in every object.
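(For illustration, a call like ${__timeShift(yyyy-MM-dd,,P1D,,)} is what I mean here; the exact format string is a stand-in. Stored once in a variable, the result is evaluated a single time, so every object receives the same date.) The payload looks like this: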
{
  "deviceNumber": "XX",
  "array": [
    {
      "keyValue1": "${value1_ReadFromCSV}",
      "keyValue2": "${value2_ReadFromCSV}",
      "keyValue3": "${value3_ReadFromCSV}",
      "keyValue4": "${value4_ReadFromCSV}",
      "keyValue5": "${value5_ReadFromCSV}",
      "keyValue6": "2020-05-23" (Should be dynamically generated)
    },
    {
      "keyValue7": "value7_ReadFromCSV",
      "keyValue8": "value8_ReadFromCSV",
      "keyValue9": "value9_ReadFromCSV",
      "keyValue10": "value10_ReadFromCSV",
      "keyValue11": "value11_ReadFromCSV",
      "keyValue12": "2020-05-24" (Should be dynamically generated)
    },
    ...
    {
      "keyValue995": "value995_ReadFromCSV",
      "keyValue996": "value996_ReadFromCSV",
      "keyValue997": "value997_ReadFromCSV",
      "keyValue998": "value998_ReadFromCSV",
      "keyValue999": "value999_ReadFromCSV",
      "keyValue1000": "2025-12-31" (Should be dynamically generated)
    }
  ]
}
I have a partial solution: reading the CSV file line by line and storing each line in a variable using Groovy. However, I don't want to store the raw line in a variable; I want to build a JSON object like the one above from each CSV line, with a unique future date for each object in the array.
The CSV file is as follows. (Note: I have removed the date column from the CSV, as I no longer need it.)
deviceNumber,keyValue1,keyValue2,keyValue3,keyValue4,keyValue5,keyValue7,keyValue8,keyValue9,keyValue10,keyValue11,keyValue12,keyValue13,keyValue14,keyValue15,keyValue16
01,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
02,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
03,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
.
.
.
1000,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
Kindly suggest any reference/example to do this.
I can provide only generic instructions:
You can dynamically construct the request body using a JSR223 PreProcessor
You can read the CSV file into memory using the File.readLines() function
You can build JSON out of the values from the CSV file using the JsonBuilder class
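Putting the three steps together, here is a minimal Groovy sketch for a JSR223 PreProcessor. The file path, the per-object key names and the way CSV columns map to object fields are assumptions (your payload numbers the keys differently in each object), so adjust them to the real layout; the point is that each object gets its own future date by shifting today's date by the record index:

import groovy.json.JsonBuilder
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// assumption: CSV path and column layout as shown in the question
def lines = new File('/path/to/data.csv').readLines()
def records = lines.tail()                              // skip the header row

def formatter = DateTimeFormatter.ofPattern('yyyy-MM-dd')
def today = LocalDate.now()

def objects = records.withIndex().collect { line, idx ->
    def values = line.tokenize(',')                     // values[0] is deviceNumber
    def obj = [:]
    (1..5).each { i -> obj['keyValue' + i] = values[i] }
    // unique future date: one day further ahead per record
    obj['keyValue6'] = today.plusDays(idx + 1).format(formatter)
    obj
}

// assumption: deviceNumber is taken from the first record
def payload = new JsonBuilder([deviceNumber: records[0].tokenize(',')[0], array: objects])
vars.put('requestBody', payload.toPrettyString())

Then reference the generated body as ${requestBody} in the HTTP sampler's body data.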
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
Related
Write extracted values to CSV or text in Gatling
I am trying to write values extracted from the session to a CSV file in a Scala Gatling script. I am extracting multiple fields and need to write the data to multiple columns in the CSV. Can someone please suggest how to implement this?
You can use the session to first save all the required values and later write them to the file via an exec call:

exec(
  ...
    .check(regex("data(.*?)\"").findAll.saveAs("userIds"))
)
.exec( session => {
  scala.reflect.io.File("data.csv").appendAll(session("userIds").as[Seq[String]].mkString(",") + "\n")
  session
})

In the example above I assume you get a list of userIds and write them on a single line. You can check a more advanced example for writing headers & rows here
Load more records from Gatling feeder
I would like to inject n rows from my CSV file into the Gatling feeder. Gatling's default approach is to read and inject one row at a time; however, I cannot find anywhere how to take and inject, e.g., an Array into a template. I came up with creating a JSON template with Gatling Expressions as some of the fields. The issue is that I have a JSON array with N elements:

[
  {"myKey": ${value}, "mySecondKey": ${value2}, ...},
  {"myKey": ${value}, "mySecondKey": ${value2}, ...},
  {"myKey": ${value}, "mySecondKey": ${value2}, ...},
  {"myKey": ${value}, "mySecondKey": ${value2}, ...}
]

And my CSV:

value,value2,...
value,value2,...
value,value2,...
value,value2,...
...

I would like to make this as efficient as possible. My data is in a CSV file, so I would like to use the CSV feeder. Also, the file is large, so readRecords is not possible, since I run out of memory. Is there a way I can put N records into the request body using Gatling?
From the documentation: feed(feeder, 2)

Old Gatling versions: attribute names will be suffixed. For example, if the columns are named "foo" and "bar" and you're feeding 2 records at once, you'll get "foo1", "bar1", "foo2" and "bar2" session attributes.

Modern Gatling versions: values will be arrays containing all the values of the same key. In this latter case, you can access a value at a given index with the Gatling EL: #{foo(0)}, #{foo(1)}, #{bar(0)} and #{bar(1)}
It seems that the documentation on this front might have changed a bit since then: "It's also possible to feed multiple records at once. In this case, values will be arrays containing all the values of the same key." I personally wrote this in Java, but it is easy to find the syntax for Scala as well in the documentation. The solution I used for my CSV file is to add the feeder to the scenario like:

.feed(CoreDsl.csv("pathofyourcsvfile"), NUMBER_OF_RECORDS)

To apply/receive that array data during your .exec you can do something like this:

.post("YourEndpointPath").body(StringBody(session -> yourMethod(session.get(YourStringKey))))

In this case I am using a POST and a request body, but the concept remains similar for GET and the corresponding query parameters. So basically, you can use the session lambda in combination with the session.get method; "yourMethod" can then receive this parameter as an Object[].
Use CSV values in JMeter as request path
I have a JMeter User Defined Variable that holds a comma-separated value: ${countries} = IN,US,CA,ALL. (I first tried to get it as a list/array: [IN,US,CA,ALL].) I want to use the variable to test a web service: GET /${country}/info. Is this possible using a ForEach Controller or Loop Controller? The only requirement is that I want to save or read it as IN,US,...,ALL and use the individual values in the request path. Thanks
The CSV should be in the format shown in the attached image. Refer to this link on how to use a CSV file with JMeter: http://ivetetecedor.com/how-to-use-a-csv-file-with-jmeter/

Thread Group settings:
No. of threads: 1
Ramp-up period: 1
Loop count: 4

Hope this will help.
The CSV config is a red herring; you don't need it. You can use a Regular Expression Extractor to split the variable into another variable (e.g. MyVar), using something like:

(.+?)[,\n]

This tries to match each item before a comma or newline. It will place the values in variables like MyVar_1, MyVar_2, etc., which is as close to an array as JMeter natively understands. You can then loop over the matches using MyVar_matchNr and MyVar_1 to MyVar_n (you will need the __V() function to access the 'array' contents).
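For illustration, assuming the extractor stored its matches under the reference name MyVar, a ForEach Controller set up roughly like this feeds each value into the request path:

ForEach Controller:
  Input variable prefix: MyVar
  Output variable name: country
HTTP Request (inside the controller):
  Path: /${country}/info

Each pass of the loop exposes the next match as ${country}, so the sampler requests /IN/info, /US/info, and so on.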
Generate XML from an XML schema (XSD) in Progress OpenEdge 4GL?
I am using 4GL in Progress OpenEdge 11.3 and I want to write an XML file from an XSD schema file. Can I generate an XML file from an XML Schema (XSD) with Progress OpenEdge 4GL? Thanks.
Well, you can use a method called READ-XMLSCHEMA (and its counterpart WRITE-XMLSCHEMA). These can be applied to both TEMP-TABLEs and ProDataSets (depending on the complexity of the XML). The ProDataSet documentation, found here, contains quite a lot of information about this. There's also a book called Working with XML that can help you. This is the basic syntax of READ-XMLSCHEMA (when working with datasets):

READ-XMLSCHEMA ( source-type, { file | memptr | handle | longchar }, override-default-mapping [, field-type-mapping [, verify-schema-mode ] ] ).

A basic example would be:

DATASET ds:READ-XMLSCHEMA("file", "c:\temp\file.xsd", FALSE).

However, since you need to work with the actual XML, you will also have to handle data. That data is handled in the TEMP-TABLEs contained within the dataset. It might be easier to start by creating a static ProDataSet that corresponds to the schema and then handle its data whatever way you want.
Gatling Scala feeder into file
I am trying to feed the values of a feeder that supplies IDs into a .txt file. Is there any way to extract values directly from the feeder without having to extract the ID from each session?
I'm not sure what you mean, but to extract values from a feeder you can use the following:

val creditCard = "creditCard"
feed(tsv("CreditCard.txt").random)

Inside the file "CreditCard.txt" the first line (the column name) must exactly match the value of the variable: "creditCard". You can then use it as "${creditCard}" in your script.