Use CSV values in JMeter as the request path (REST)

I have a JMeter User Defined Variable holding a comma-separated value: ${countries} = IN,US,CA,ALL.
(I first tried to store it as a list/array: [IN,US,CA,ALL].)
I want to use the variable to test a web service: GET /${country}/info. Is this possible using a ForEach Controller or a Loop Controller?
The only requirement is that I want to store or read the values as IN,US,..,ALL and use each one in the request path.
Thanks

The CSV file should follow the format shown in the attached image.
See this link for how to use a CSV file with JMeter: http://ivetetecedor.com/how-to-use-a-csv-file-with-jmeter/
Thread Group Settings
No. of threads: 1
Ramp-up period: 1
Loop Count: 4
Hope this will help.
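A minimal sketch of that setup, assuming a file named countries.csv with one country code per line (IN, US, CA, ALL):
CSV Data Set Config:
  Filename: countries.csv
  Variable Names: country
  Recycle on EOF: False
  Stop thread on EOF: True
HTTP Request (in the Thread Group):
  Path: /${country}/info
With the Loop Count set to 4, each iteration reads the next line of the CSV, so the four requests go to /IN/info, /US/info, /CA/info and /ALL/info.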

The CSV Data Set Config is a red herring; you don't need it.
You can use a Regular Expression Extractor to split the variable into another variable (e.g. MyVar), using something like:
(.+?)[,\n]
This matches each item up to a comma or newline and places the values in variables such as MyVar_1, MyVar_2, etc. That is as close to an array as JMeter natively understands.
You can then loop over the matches using MyVar_matchNr and MyVar_1 to MyVar_n (you will need the __V() function to access the 'array' contents).
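A minimal sketch of how the pieces could be wired together (component settings are illustrative, and the extractor must be applied to the JMeter variable rather than to response data):
Regular Expression Extractor:
  Apply to: JMeter Variable Name to use: countries
  Reference Name: MyVar
  Regular Expression: (.+?)[,\n]
  Template: $1$
  Match No.: -1
ForEach Controller:
  Input variable prefix: MyVar
  Output variable name: country
  Add "_" before number: checked
HTTP Request (inside the ForEach Controller):
  Path: /${country}/info
Note that with this particular regex the variable may need a trailing comma or newline for the last item (ALL) to be captured.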


What is the equivalent to Kusto's CountOf() function in Azure Data Factory?

My requirement is to extract a string from filenames using an ADF variable. I need to extract everything up to the final underscore '_', and the number of underscores varies from filename to filename, as seen in the examples below.
abc_xyz_20221221.txt --> abc_xyz
abc_xyz_a1_20221221.txt --> abc_xyz_a1
abc_c_ab_a1_20221221.txt --> abc_c_ab_a1
abc_c_ab_a1_a11_20221221.txt --> abc_c_ab_a1_a11
I tried to do this with indexof() to get the position of the final underscore, but it does not accept negative values. The logic below works in KQL (Azure Data Explorer) but fails in ADF because there is no countof() there. Is there an equivalent function in ADF, or can you suggest how to achieve the same result?
substring("abc_xyz_20221221.txt", 0,
indexof("abc_xyz_20221221.txt", "_", 0,
strlen("abc_xyz_20221221.txt"),
countof("abc_xyz_20221221.txt", '_')))
You can also do it like this, using split and join inside a ForEach activity.
Array for ForEach activity:
["abc_xyz_20221221.txt","abc_xyz_a1_20221221.txt","abc_c_ab_a1_20221221.txt","abc_c_ab_a1_a11_20221221.txt"]
Append variable inside ForEach:
@join(take(split(item(), '_'), add(length(split(item(), '_')), -1)), '_')
Result in an array variable:
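For the sample filenames above, this expression splits each item on '_', drops the last element, and joins the rest back together, so the resulting array should look like:
["abc_xyz","abc_xyz_a1","abc_c_ab_a1","abc_c_ab_a1_a11"]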
As mentioned by @Joel Cochran, use the expression below, with lastindexof(), in the Append variable activity inside the ForEach.
@substring(item(),0,lastindexof(item(),'_'))
This is just a simpler form of what @Rakesh called out above. The only difference is that his implementation iterates over an array; in my case the file name is stored in a variable named foo.
@substring(variables('foo'),0,lastindexof(variables('foo'),'_'))
Output:

How do I prevent users from using a thousands separator in FileMaker Pro?

In FileMaker Pro, when using a number field, the user can choose whether or not to type a thousands separator. For example, in a database with a field for the price of an item, the user can enter either 1,000 or 1000.
I am using my database to generate an XML file that needs to be uploaded. The problem is that my XML schema only allows a value of 1000, not 1,000. Therefore, I want to either automatically remove the comma or (my preference in this case) alert the user when they try to enter a value with a thousands separator.
What I tried is the following.
For the field, I am setting Validation options. For example:
Require Strict data type: Numeric Only
Validated by calculation: Position ( Self ; "," ; 1 ; 1 ) = 0
Validated by calculation: Self = Substitute ( Self ; "," ; "" )
Auto-enter calculation: Filter( Self ; "0123456789." )
Unfortunately, none of these work. As the field is defined as a number (and I want to keep it that way, since I also perform calculations based on it), the Position and Substitute functions apparently ignore the thousands separator!
EDIT:
Note that I am generating my XML by concatenating a string, for example:
"<Products><Product><Name>" & Name & "</Name><Price>" & Price & "</Price></Product></Product>"
The reason is that what I am exporting is dependent on the values in my database. Therefore, I am not using the [File][Export records...] function.
Auto-enter calculation will work, but you need to uncheck the box "Do not replace existing value of field" (which is checked by default).
I'd suggest using the calculation GetAsNumber ( Self ) as the auto-enter calc. If the field should only contain integers, wrap that in a call to Int().
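A minimal sketch of that auto-enter calculation (assuming the field should only hold whole numbers):
Int ( GetAsNumber ( Self ) )
GetAsNumber strips any formatting characters from the entry, and Int() discards any fractional part; drop the Int() wrapper if decimals are allowed.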
I am using my database to generate an XML file that needs to be uploaded. The thing is, that my XML scheme dictates that only a value of 1000 is allowed and not 1,000.
If this is only a problem when you export, why not handle it when exporting?
If you are exporting as XML using XSLT, you can add an instruction to your stylesheet to remove the comma from all number fields (see the sketch after this list);
Alternatively, you can export from a layout where the field is formatted to display without the comma and select the "Apply current layout's data formatting to exported data" option when exporting.
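For the XSLT route, a minimal sketch, assuming the exported XML contains a Price element (the element name is illustrative):
<xsl:value-of select="translate(Price, ',', '')"/>
translate() simply removes every comma from the value.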
Added:
Perhaps I should have clarified. I am not using the export function to generate the XML because there is some logic involved in how the XML should be formatted (depending on the data I want to export). Instead, I build a string that combines XML tags with the actual values from the database.
IMHO, you're making a mistake by not taking advantage of the built-in XML/XSLT export option. Any imaginable logic can be implemented that way, without burdening your solution with the fragile task of creating valid XML.
In any case, if you're using the field in a calculation, you can replace all references to it with:
GetAsNumber ( YourField )
to get an unformatted, numeric-only value.
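Applied to the XML-building calculation from the question, the price element could, for example, be written as:
"<Price>" & GetAsNumber ( Price ) & "</Price>"
(assuming the field is named Price, as in the example string above).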
Your question puzzles me. As far as I know, FileMaker does not store the thousands separator, but rather offers it only as a display option.
That's also why those functions can't find it.
Are you sure you are exporting the raw data and not a "formatted as layout" variant?

How can I pass a variable from one feature file to another

I have a variable in one scenario of a feature file that I need to use in the request body of a second feature file.
For Example:
A.feature
Scenario: Test
Given url 'abc'
* def number = 12345
And request {tyu:'#(number)',dhd:'lkj'}
When method put
Then status 200
B.feature
Scenario: Test2
Given url 'pqr'
And request {tyu:'#(number)'}
When method put
Then status 200
Note: the number variable in A.feature is a 6-digit number that is randomly generated every time, and the same value should be passed to B.feature.
Normally, if you have two Scenario-s that depend on one another, you have to combine them into one. Refer to the docs here: https://github.com/intuit/karate#script-structure
But if you are really looking for a way to initialize something and re-use it across all feature files, maybe you are looking for karate.callSingle(): https://github.com/intuit/karate#hooks
var result = karate.callSingle('get-token.feature');
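A minimal sketch of how this could look in karate-config.js, assuming A.feature is reworked into a re-usable feature (here called create-number.feature) that defines number:
function fn() {
  var config = {};
  // runs only once for the whole test run and caches the result
  var result = karate.callSingle('classpath:create-number.feature');
  // variables defined in the called feature are available on the result
  config.number = result.number;
  return config;
}
Any feature (including B.feature) can then refer to number directly in its request body.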

Talend: How To Pass the Last Modified File Into tFileInputDelimited?

I have searched all over and read this post, but it doesn't seem complete and doesn't work.
The situation: I need to get the last modified file from a directory on the local machine. I then need to pass that file into the fileinputdelimited component.
I currently have:
tFileList --> iterate --> tIterateToFlow --> tSampleRow
--> tFlowToIterate --> tFileInputDelimited --> tLogRow (just to make sure it's pulling the right file)
But it doesn't work. I have configured it so that tIterateToFlow has a column called
"FileName" with ((String)globalMap.get("CURRENT_FILE")) as the value,
"FileDirectory" with ((String)globalMap.get("CURRENT_FILEDIRECTORY")) as the value, and
"FileAndDirectory" with ((String)globalMap.get("CURRENT_FILEPATH")) as the value.
The tSampleRow is limited to "1".
The tFlowToIterate is set so that
"FileNameOnly" is the value of "FileName",
"FileDirectoryOnly" is the value of "FileDirectory", and
"FilePathComplete" is the value of "FileAndDirectory".
In the file name field of the tFileInputDelimited, I have ((String)globalMap.get("FilePathComplete")).
When it runs I get an error saying it cannot find the file or path. If I cut out the file input component and send the flow straight to the tLogRow, it shows a single blank line.
Any ideas?
I'm not sure if you've just slightly misconfigured the job here but it seems to work fine for me.
Here's a few screenshots showing my job design:
The only thing I can think of just by looking at your post is that you might have slightly messed up the key value pair combinations in the tFlowToIterate. I tend to find that the default settings there work fine pretty much all of the time and it makes it a little more obvious what it's doing as well.
EDIT: Actually, it looks like you might be using the wrong values in your tIterateToFlow. The tFileList puts the values for the file paths etc. into the global map, but it prefixes them with the unique component name. If you hit Ctrl+Space in the value field it should prompt you with a list of available values (these are also shown in the "Outline" tab of the Studio). It typically makes an implicit conversion to String, but here you need to convert explicitly, so use .toString() instead of a (String) cast.
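So, assuming the tFileList component is named tFileList_1, the value for the complete path in the tIterateToFlow column would look something like:
globalMap.get("tFileList_1_CURRENT_FILEPATH").toString()
rather than (String)globalMap.get("CURRENT_FILEPATH").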
Another way to get the last modified file is as below:
tFileList (sorted DESC by file modified date) ------> tFixedFlowInput (schema: filename, filenumber) -----> tHashOutput
Here, in the tFixedFlowInput:
filename = (String)globalMap.get("tFileList_1_CURRENT_FILEDIRECTORY")+"/"+(String)globalMap.get("tFileList_1_CURRENT_FILE")
filenumber = (Integer)globalMap.get("tFileList_1_NB_FILE")
What the above accomplishes is to get the list of all files in the directory together with their number/rank, where the last modified file has filenumber = 1, the next one 2, and so on.
Now, on the OnSubjobOk trigger of the above tFileList subjob, you can have a tHashInput that reads from the above tHashOutput and filters only the row where filenumber == 1, i.e. the last modified file.
tHashInput (linked to the tHashOutput) ----> tFilterRow (filenumber == 1) ------> tLogRow
One reason why you are getting null is probably that you used globalMap.get("CURRENT_FILEPATH") instead of globalMap.get("tFileList_1_CURRENT_FILEPATH").
A simple solution to the above problem could be as below:
tFileList (sorted ASC by file modified date) --> tIterateToFlow --> tJava (just to end the subjob).
Then, on the OnSubjobOk trigger:
tFileInputDelimited (use (String)globalMap.get("tFileList_1_CURRENT_FILE") or (String)globalMap.get("tFileList_1_CURRENT_FILEPATH") as the file name / file path)
Explanation:
Since tFileList iterates over the files in ASC order, the latest file name is what remains in the globalMap after the last iteration. The list is only iterated up to tIterateToFlow, so once that subjob finishes, (String)globalMap.get("tFileList_1_CURRENT_FILE") will always give the last file name from the iterated list, which in our case is the latest file.
Main Flow:
Component View:

How to load two sheets at once with xlsread

I currently have this code, which uses a set of prompts to load and assign the appropriate data:
full=xlsread(input('File Name for Full data?\n'),input('Sheet Name for full?\n'));
empty=xlsread(input('File Name for Empty data?\n'),input('Sheet Name for empty?\n'));
xx1=full(:,1);
yy1=full(:,2);
ff1=full(:,3);
xx2=empty(:,1);
yy2=empty(:,2);
ff2=empty(:,3);
However, since the full and empty sheets are both in one spreadsheet, I would like there to be only one prompt for the file and then a prompt for each sheet, something like:
everything=xlsread(input('File Name for Full data?\n'),input('Sheet Name for full?\n'),input('Sheet Name for empty?\n'));
xx1=everything(:,1);
yy1=everything(:,2);
ff1=everything(:,3);
xx2=everything(:,4);
yy2=everything(:,5);
ff2=everything(:,6);
What can I do to make this work out?
Just make the input calls before you use xlsread (passing 's' so the response is returned as a string):
filename = input('File Name for Full data?\n', 's');
full = input('Sheet Name for full?\n', 's');
empty = input('Sheet Name for empty?\n', 's');
full = xlsread(filename, full);   % the sheet-name variables are overwritten with the sheet data
empty = xlsread(filename, empty);
xx1=full(:,1);
yy1=full(:,2);
ff1=full(:,3);
xx2=empty(:,1);
yy2=empty(:,2);
ff2=empty(:,3);
Though xlsread does not support this directly, you can create a wrapper that calls xlsread in the right way.
Basically, just ask for the input arguments you need and, based on them, make the calls to xlsread.
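A minimal sketch of such a wrapper (the function name and prompts are illustrative):
function [full, empty] = readFullAndEmpty()
    % Ask once for the workbook and once for each sheet name
    filename   = input('File Name?\n', 's');
    fullSheet  = input('Sheet Name for full?\n', 's');
    emptySheet = input('Sheet Name for empty?\n', 's');
    % Two xlsread calls against the same file
    full  = xlsread(filename, fullSheet);
    empty = xlsread(filename, emptySheet);
end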
It is indeed a weakness that you cannot read multiple sheets simultaneously, but xlsread is just a very basic command. Personally I think it is a greater weakness that you can only read out contiguous ranges.