I would like to inject N rows from my CSV file through a Gatling feeder. Gatling's default approach is to read and inject one row at a time, but I cannot find anywhere how to take and inject, e.g., an array into a template.
I came up with a JSON template that uses Gatling Expressions for some of the fields. The issue is that I have a JSON array with N elements:
[
{"myKey": ${value}, "mySecondKey": ${value2}, ...},
{"myKey": ${value}, "mySecondKey": ${value2}, ...},
{"myKey": ${value}, "mySecondKey": ${value2}, ...},
{"myKey": ${value}, "mySecondKey": ${value2}, ...}
]
And my csv:
value,value2,...
value,value2,...
value,value2,...
value,value2,...
...
I would like to make this as efficient as possible. My data is in a CSV file, so I would like to use the csv feeder. Also, the file is large, so readRecords is not an option; I run out of memory.
Is there a way to put N records into the request body using Gatling?
From the documentation:
feed(feeder, 2)
Old Gatling versions:
Attribute names will be suffixed. For example, if the columns are named "foo" and "bar" and you're feeding 2 records at once, you'll get "foo1", "bar1", "foo2" and "bar2" session attributes.
Modern Gatling versions:
values will be arrays containing all the values of the same key.
In this latter case, you can access a value at a given index with Gatling EL: #{foo(0)}, #{foo(1)}, #{bar(0)} and #{bar(1)}
It seems that the documentation on this front might have changed a bit since then:
It’s also possible to feed multiple records at once. In this case, values will be arrays containing all the values of the same key.
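Applied to the template from the question, that indexed EL syntax would look something like this (a sketch only, assuming the CSV columns are named value and value2 and that four records are fed at once):
[
{"myKey": #{value(0)}, "mySecondKey": #{value2(0)}},
{"myKey": #{value(1)}, "mySecondKey": #{value2(1)}},
{"myKey": #{value(2)}, "mySecondKey": #{value2(2)}},
{"myKey": #{value(3)}, "mySecondKey": #{value2(3)}}
]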
I personally wrote this in Java, but it is easy to find the syntax for Scala as well in the documentation.
The solution I used for my CSV file is to add the feeder to the scenario like this:
.feed(CoreDsl.csv("pathofyourcsvfile"), NUMBER_OF_RECORDS)
To apply/receive that array data during your .exec you can do something like this:
.post("YourEndpointPath").body(StringBody(session -> yourMethod(session.get(YourStringKey))))
In this case, I am using a POST and a request body, but the concept is similar for a GET and its query parameters. So basically, you can use the session lambda in combination with the session.get method.
"yourMethod" can then receive this parameter as an Object[].
I have an ADF pipeline where the final output from a Set variable activity is something like this: {name:test, value:1234}.
The input coming to this variable is
{
"variableName": "test",
"value": "test:1234"
}
The expression provided in the Set variable Item column is @item().ColumnName. And the ColumnName in my JSON file is something like this: "ColumnName":"test:1234"
How can I change it so that I get only 1234? I am only interested in the value coming here.
It looks like you need to split the value on the colon, which you can do using Azure Data Factory (ADF) expressions and functions: the split function splits a string into an array, and the last function gets the last item from the array. This works quite neatly in this case:
@last(split(variables('varWorking'), ':'))
Change the variable name to suit your case. You can also use string functions like lastIndexOf to locate the colon and grab the rest of the string from there. A sample expression would be something like this:
@substring(variables('varWorking'), add(indexOf(variables('varWorking'), ':'), 1), 4)
It's a bit more complicated (and the hard-coded length of 4 assumes the part after the colon is always four characters long), but it may work for you, depending on the requirement.
It seems like you are using it inside an iterator, since you have item(). However, I tried it with a simple JSON lookup value:
@last(split(activity('Lookup').output.value[0].ColumnName, ':'))
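If the value really comes from a ForEach item() rather than a variable, the same pattern should apply directly to the item (untested, assuming the property is still called ColumnName):
@last(split(item().ColumnName, ':'))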
I have a request payload (JSON format) which has an array of 1000 objects, and each object has 6 key-value pairs. Five of them I'm reading from a CSV file using parameterization, and the 6th key has to be a unique future date for each object in the array.
I tried this with the time-shift function, which works for one iteration, but I want to execute it for N iterations.
I looked for Groovy code for this, but I have no knowledge of Groovy and have only just started learning it.
How can I achieve this in JMeter?
Also, when the time-shift function is read from HTTP Request Defaults > Parameters or from Test Plan > User Defined Variables, it does not produce a different date for each object; it duplicates the date of the first variable in every object.
{
    "deviceNumber": "XX",
    "array": [
        {
            "keyValue1": "${value1_ReadFromCSV}",
            "keyValue2": "${value2_ReadFromCSV}",
            "keyValue3": "${value3_ReadFromCSV}",
            "keyValue4": "${value4_ReadFromCSV}",
            "keyValue5": "${value5_ReadFromCSV}",
            "keyValue6": "2020-05-23" (Should be dynamically generated)
        },
        {
            "keyValue7": "value7_ReadFromCSV",
            "keyValue8": "value8_ReadFromCSV",
            "keyValue9": "value9_ReadFromCSV",
            "keyValue10": "value10_ReadFromCSV",
            "keyValue11": "value11_ReadFromCSV",
            "keyValue12": "2020-05-24" (Should be dynamically generated)
        },
        ...
        {
            "keyValue995": "value995_ReadFromCSV",
            "keyValue996": "value996_ReadFromCSV",
            "keyValue997": "value997_ReadFromCSV",
            "keyValue998": "value998_ReadFromCSV",
            "keyValue999": "value999_ReadFromCSV",
            "keyValue1000": "2025-12-31" (Should be dynamically generated)
        }
    ]
}
I have got a partial solution: reading the CSV file line by line and storing each line in a variable using Groovy. However, I don't want to store the raw line in the variable; I want to build a JSON object like the one above from each line of the CSV file, with a unique future date for each object in the array.
The CSV file is (note: I have removed the date column from the CSV as I no longer need it):
deviceNumber,keyValue1,keyValue2,keyValue3,keyValue4,keyValue5,keyValue7,keyValue8,keyValue9,keyValue10,keyValue11,keyValue12,keyValue13,keyValue15,keyValue15,keyValue16
01,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
02,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
03,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
.
.
.
1000,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring,somestring
Kindly suggest any reference/example to do this.
I can provide only generic instructions (a rough sketch follows the links below):
You can dynamically construct the request body using a JSR223 PreProcessor
You can read the CSV file into memory using the File.readLines() function
You can build the JSON out of the values from the CSV file using the JsonBuilder class
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
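Putting those pieces together, a JSR223 PreProcessor (language: Groovy) could look roughly like the sketch below. This is only an illustration, not a drop-in solution: the file path, the one-object-per-CSV-line mapping and the futureDate key name are assumptions you will need to adapt to your real payload structure.

import groovy.json.JsonBuilder
import java.time.LocalDate

// Read the whole CSV into memory; the first line is the header
def lines = new File('/path/to/your.csv').readLines()
def header = lines[0].split(',')
def today = LocalDate.now()

def array = []
lines.drop(1).eachWithIndex { line, i ->
    def cols = line.split(',')
    def obj = [:]
    // Copy every named column except deviceNumber (column 0)
    for (int c = 1; c < cols.length; c++) {
        obj[header[c]] = cols[c]
    }
    // Unique future date per object
    obj['futureDate'] = today.plusDays(i + 1).toString()
    array << obj
}

def payload = [deviceNumber: lines[1].split(',')[0], array: array]
// Expose the generated body to the sampler
vars.put('requestBody', new JsonBuilder(payload).toPrettyString())

The HTTP Request sampler body then simply references ${requestBody}.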
I am using 4GL in Progress OpenEdge 11.3 and I want to write an XML file from an XSD schema file.
Can I generate an XML file from an XML Schema (XSD) with Progress OpenEdge 4GL?
Thanks.
Well, you can use a method called READ-XMLSCHEMA (and its counterpart WRITE-XMLSCHEMA).
These can be applied to both TEMP-TABLES and ProDataSets (depending on the complexity of the XML).
The ProDataSet documentation contains quite a lot of information about this. There's also a book called Working with XML that can help you.
This is the basic syntax of READ-XMLSCHEMA (when working with datasets):
READ-XMLSCHEMA ( source-type, { file | memptr | handle | longchar },
override-default-mapping [, field-type-mapping [, verify-schema-mode ] ] ).
A basic example would be:
DATASET ds:READ-XMLSCHEMA("file", "c:\temp\file.xsd", FALSE).
However, since you need to work with the actual XML, you will also have to handle data. That data is handled in the TEMP-TABLES contained within the DataSet. It might be easier to start by creating a static ProDataSet that corresponds to the schema and then handle its data whatever way you want.
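Once the temp-tables in the ProDataSet are populated, the XML document itself can be written with the corresponding WRITE-XML method. A minimal sketch (file names are placeholders):

DATASET ds:READ-XMLSCHEMA("file", "c:\temp\file.xsd", FALSE).
/* ... create and populate the temp-table records here ... */
DATASET ds:WRITE-XML("file", "c:\temp\file.xml", TRUE). /* TRUE = formatted output */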
I am new to pandas and trying to learn how to work with it. I'm having a problem when trying to use an example I saw in one of Wes's videos and notebooks on my own data. I have a CSV file that looks like this:
filePath,vp,score
E:\Audio\7168965711_5601_4.wav,Cust_9709495726,-2
E:\Audio\7168965711_5601_4.wav,Cust_9708568031,-80
E:\Audio\7168965711_5601_4.wav,Cust_9702445777,-2
E:\Audio\7168965711_5601_4.wav,Cust_7023544759,-35
E:\Audio\7168965711_5601_4.wav,Cust_9702229339,-77
E:\Audio\7168965711_5601_4.wav,Cust_9513243289,25
E:\Audio\7168965711_5601_4.wav,Cust_2102513187,18
E:\Audio\7168965711_5601_4.wav,Cust_6625625104,-56
E:\Audio\7168965711_5601_4.wav,Cust_6073165338,-40
E:\Audio\7168965711_5601_4.wav,Cust_5105831247,-30
E:\Audio\7168965711_5601_4.wav,Cust_9513082770,-55
E:\Audio\7168965711_5601_4.wav,Cust_5753907026,-79
E:\Audio\7168965711_5601_4.wav,Cust_7403410322,11
E:\Audio\7168965711_5601_4.wav,Cust_4062144116,-70
I load it into a DataFrame and then group it by "filePath" and "vp"; the code is:
res = df.groupby(['filePath','vp']).size()
res.index
and the output is:
[E:\Audio\7168965711_5601_4.wav Cust_2102513187,
Cust_4062144116, Cust_5105831247,
Cust_5753907026, Cust_6073165338,
Cust_6625625104, Cust_7023544759,
Cust_7403410322, Cust_9513082770,
Cust_9513243289, Cust_9702229339,
Cust_9702445777, Cust_9708568031,
Cust_9709495726]
Now I'm trying to access the index like a dict, as I saw in the examples, but when I do
res['Cust_4062144116']
I get an error:
KeyError: 'Cust_4062144116'
I do get a result when I use the file path, but as I understand it (and saw in previous examples), I should be able to use the vp keys as well, shouldn't I?
Sorry if it's a trivial one; I just can't understand why it works in one example but not in the other.
Rutger, you are not correct. It is possible to partially index a MultiIndex Series; I simply did it the wrong way.
The first level of the index is the file name (e.g. E:\Audio\7168965711_5601_4.wav above) and the second level is vp. Meaning, for each file name I have multiple vps.
Now, this is correct:
res[r'E:\Audio\7168965711_5601_4.wav']
and will return:
Cust_2102513187 2
Cust_4062144116 8
....
but trying to index by the inner index (the Cust_ indexes) will fail.
You group by two columns and therefore get a MultiIndex in return. This means you also have to slice using those two columns, not with a single index value.
Your .size() on the groupby object turns it into a Series. If you force it into a DataFrame, you can use the .xs method to slice on a single level:
res = pd.DataFrame(df.groupby(['filePath','vp']).size())
res.xs('Cust_4062144116', level=1)
That works. If you want to keep it as a Series, boolean indexing can help, something like:
res[res.index.get_level_values(1) == 'Cust_4062144116']
The last option is a bit less readable but sometimes also more flexible; you could test for multiple values at once, for example:
res[res.index.get_level_values(1).isin(['Cust_4062144116', 'Cust_6073165338'])]
I have the following problem, given this XML data structure:
<level1>
    <level2ElementTypeA></level2ElementTypeA>
    <level2ElementTypeB>
        <level3ElementTypeA>String1Ineed</level3ElementTypeA>
    </level2ElementTypeB>
    ...
    <level2ElementTypeC>
        <level3ElementTypeB attribute1="...">
            <level4ElementTypeA>String2Ineed</level4ElementTypeA>
        </level3ElementTypeB>
    </level2ElementTypeC>
    ...
    <level2ElementTypeD></level2ElementTypeD>
</level1>
<level1>...</level1>
I need to create an entity which contains String1Ineed and String2Ineed.
So every time I come across a level3ElementTypeB with a certain value in attribute1, I have my String2Ineed. The ugly part is how to obtain String1Ineed, which is located in the first element of type level2ElementTypeB above the current level2ElementTypeC.
My 'imperative' solution is to always keep a variable with the last value of String1Ineed and, if I hit the criteria for String2Ineed, simply use that. Looking at this from a plain collection-processing point of view, how would you model the backtracking logic between String1Ineed and String2Ineed? Using the State monad?
Isn't this what XPath is for? You can find String2Ineed and then change the axis to search back for String1Ineed.
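For example, something along these lines should work (untested, using the element names from the question; 'someValue' is a placeholder for the attribute1 value you are matching on):

//level3ElementTypeB[@attribute1='someValue']/level4ElementTypeA/text()
selects String2Ineed, and

//level3ElementTypeB[@attribute1='someValue']/../preceding-sibling::level2ElementTypeB[1]/level3ElementTypeA/text()
selects String1Ineed: preceding-sibling is a reverse axis, so [1] picks the nearest level2ElementTypeB before the current level2ElementTypeC.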