Calling JMeter Functions from a BeanShell Assertion Script - Eclipse

I am trying to run JMeter test suites in Eclipse.
In my test suite I am using a BeanShell Assertion to count the number of rows in a CSV file.
I have a custom JMeter function to do so.
The script of the BeanShell Assertion is:
String str = "${__CustomFunction("Path to the CSV file")}";
int i = Integer.parseInt(str);
if (i == 0) {
    Failure = true;
    FailureMessage = "Failed!";
}
return i;
This test suite works fine when I run it with JMeter on my local machine.
Only when I try to run it from Eclipse (using the JMeter Maven plugin) do I see the following error:
jmeter.util.BeanShellInterpreter: Error invoking bsh method:
eval Sourced file: inline evaluation of: `` String str =
"${__CustomFunction("FilePath")}"; int i = Integ . . . '' : Typed
variable declaration : Method Invocation Integer.parseInt
I am wondering whether there is some other way to invoke JMeter functions when executing the test from Eclipse. I am sure the function itself is correct because, as mentioned above, the test suite works fine when run with JMeter on my local machine.
Any help would be appreciated.
Thanks.

Are you sure your custom function jar is visible to the Maven plugin?
Since it works when you run it from JMeter, I suppose you have the jar in lib/ext.
So you need to make this jar available to the JMeter Maven plugin as well.
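If you are using the com.lazerycode.jmeter:jmeter-maven-plugin, a hedged sketch of declaring the jar as an extension is below; the jmeterExtensions element is my assumption about that plugin's configuration (verify it against the docs of the plugin version you use), and the artifact coordinates are placeholders for your custom function jar:
<plugin>
    <groupId>com.lazerycode.jmeter</groupId>
    <artifactId>jmeter-maven-plugin</artifactId>
    <configuration>
        <!-- assumption: jmeterExtensions copies the listed artifacts into JMeter's lib/ext -->
        <jmeterExtensions>
            <artifact>com.example:custom-jmeter-functions:1.0.0</artifact>
        </jmeterExtensions>
    </configuration>
</plugin>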

Related

Karate-Gatling: Not able to run scenarios based on tags

I am trying to run a performance test on the scenario tagged as perf from the feature file below:
#tag1 #tag2 #tag3
**background:**
user login
#tag4 #perf
**scenario1:**
#tag4
**scenario2:**
Below is my .scala file setup:
class PerfTest extends Simulation {
  val protocol = karateProtocol()
  val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath"))
  setUp(
    getTags.inject(
      atOnceUsers(1)
    ).protocols(protocol)
  )
}
I have tried passing the tags from the command line, as well as passing the tag as an argument to the exec method in the Scala setup.
Terminal command:
mvn clean test-compile gatling:test "-Dkarate.env={env}" "-Dkarate.options= --tags #perf"
.scala update: I have also tried passing the tag as an argument to the karateFeature call.
val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath", "#perf"))
Both scenarios are executed with either approach. Any pointers on how I can force only the test tagged perf to run?
I wanted to share the finding here. I realized it works fine when I pass the tag info in the .scala file.
My scenario with the perf tag was a combination of a GET and a POST call, as I needed some data from the GET call to pass into the POST call. That is why I was seeing both calls when running the performance test.
I did not find any reference in the Karate-Gatling documentation to passing tags in the terminal command, so I am assuming that might not be a supported case.

SCP command not working in karate project - it throws command error: cannot run program scp.exe: CreateProcess error=2 [duplicate]

I'm trying to execute a bash script using Karate. I'm able to execute the script from karate-config.js and also from a .feature file. I'm also able to pass arguments to the script.
The problem is that if the script fails (exits with something other than 0), the test execution continues and finishes as successful.
I found out that when the script echoes something, I can access it as the result of the script, so I could possibly echo the exit value and assert on it (in some reusable feature), but this seems like a workaround rather than a valid, clean solution. Is there a clean way of accessing the exit code without echoing it? Am I missing something?
script
#!/bin/bash
#possible solution
#echo 3
exit 3;
karate-config.js
var result = karate.exec('script.sh arg1')
feature file
def result = karate.exec('script.sh arg1')
Great timing. We very recently did some work for CLI testing which I am sure you can use effectively. Here is a thread on Twitter: https://twitter.com/maxandersen/status/1276431309276151814
And we have just released version 0.9.6.RC4, and now we have a new karate.fork() option that returns an instance of Command on which you can call exitCode.
Here's an example:
* def proc = karate.fork('script.sh arg1')
* proc.waitSync()
* match proc.exitCode == 0
You can get more ideas here: https://github.com/intuit/karate/issues/1191#issuecomment-650087023
Note that the argument to karate.fork() can take multiple forms (a sketch of the JSON form follows the list below). If you are using karate.exec() (which will block until the process completes), the same arguments work:
- string - full command line as seen above
- string array - e.g. ['script.sh', 'arg1']
- json, where the keys can be:
  - line - string (OR)
  - args - string array
  - env - optional environment properties (as JSON)
  - redirectErrorStream - boolean, true by default, which means Sys.err appears in Sys.out
  - workingDir - working directory
  - useShell - default false; auto-prepend cmd /c or sh -c depending on OS
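For example, a minimal sketch of the JSON form, reusing the step style from the example above (the script name and argument come from the question; the workingDir and useShell values are just illustrative placeholders):
* def proc = karate.fork({ args: ['script.sh', 'arg1'], workingDir: '.', useShell: true })
* proc.waitSync()
* match proc.exitCode == 0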
And since karate.fork() is async, you need to call waitSync() if needed as in the example above.
Do provide feedback and we can tweak further if needed.
EDIT: here's a very advanced example that shows how to listen to the process output / log, collect the log, and conditionally exit: fork-listener.feature
Another answer which can be a useful reference: Conditional match based on OS
And here's how to use cURL for advanced HTTP tests! https://stackoverflow.com/a/73230200/143475
In case you need to do a lot of local file manipulation, you can use the karate.toJavaFile() utility so you can convert a relative path or a "prefixed" path to an absolute path.
* def file = karate.toJavaFile('classpath:some/file.txt')
* def path = file.getPath()

How to run a subset of TestCases using --where for NUnit

For my project I want to run the exact same test cases twice, in parallel: once locally and once on a different VM in the cloud (Azure in my case).
I duplicated the TestCase and tagged one Category("Local") and the other Category("Cloud").
Running nunit3 from the console with --where="cat == Cloud" will thus run all TestCases of every test that has one or more TestCases tagged with Category("Cloud").
Is there a different way of only running selected TestCases by a commandline switch?
Simplified example:
[TestCase(TestName = "Canary, Run in cloud."), Category("Cloud")]
[TestCase(TestName = "Canary, Run locally."), Category("Local")]
public void Canary()
{
Assert.True(true);
}
Found a workaround.
Use --params:Cloud=true as a command-line argument, and in the code:
private bool ShallRunInCloud => TestContext.Parameters["Cloud"]?.ToLowerInvariant() == "true";
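To actually restrict which test cases run, that property can gate the test body. A minimal sketch, assuming NUnit's Assert.Ignore is used for the skip (the [Test] attribute and test shape below are illustrative, not the exact code from the question):
[Test]
public void Canary()
{
    // skip unless the run was started with --params:Cloud=true
    if (!ShallRunInCloud)
        Assert.Ignore("Skipped: this run does not target the cloud.");

    Assert.True(true);
}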

Running Groovy scripts with MATLAB

I wrote a Groovy script that I need to execute from MATLAB. I added the groovy-all.jar file to MATLAB's Java class path, and I'm able to run a few commands, such as adding jars to the Groovy classpath, by creating a Groovy console object.
javaaddpath('C:\Users\rx49\Desktop\DoseWatch\QC_Project\Script_QA_images\groovy-all-2.4.7.jar');
javaaddpath('C:\Program Files\Java\jre1.8.0_91\lib\rt.jar');
console=groovy.ui.Console();
pth='C:\Users\rx49\Desktop\DoseWatch\QC_Project\Script_QA_images\file.groovy';
script = javaObject('java.io.File', pth)
console.loadScriptFile(script);
The console.loadScriptFile function only takes a java.io.File object as its argument, so I created one through the MATLAB javaObject function. When I execute the code above, MATLAB gives me the following error:
??? Java exception occurred:
java.lang.NullPointerException: Cannot invoke method edt() on null object
at org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:91)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:35)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:57)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at groovy.ui.Console.loadScriptFile(Console.groovy:649)
Error in ==> run_script at 7
console.loadScriptFile(script);
I have no idea if this will work (I don't have MATLAB, or your script), but it feels like you should be able to do:
javaaddpath('C:\Users\rx49\Desktop\DoseWatch\QC_Project\Script_QA_images\groovy-all-2.4.7.jar');
pth='C:\Users\rx49\Desktop\DoseWatch\QC_Project\Script_QA_images\file.groovy';
shell = groovy.lang.GroovyShell();
matrix = shell.run(javaObject('java.io.File', pth));

How to execute JMeter test case from Java code

How do I run a JMeter test case from Java code?
I have followed the example here from Blazemeter.com.
My code is as follows:
public class BasicSampler {

    public static void main(String[] argv) throws Exception {
        // JMeter Engine
        StandardJMeterEngine jmeter = new StandardJMeterEngine();

        // Initialize Properties, logging, locale, etc.
        JMeterUtils.loadJMeterProperties("/home/stone/Workbench/automated-testing/apache-jmeter-2.11/bin/jmeter.properties");
        JMeterUtils.setJMeterHome("/home/stone/Workbench/automated-testing/apache-jmeter-2.11");
        JMeterUtils.initLogging(); // you can comment this line out to see extra log messages of i.e. DEBUG level
        JMeterUtils.initLocale();

        // Initialize JMeter SaveService
        SaveService.loadProperties();

        // Load existing .jmx Test Plan
        FileInputStream in = new FileInputStream("/home/stone/Workbench/automated-testing/apache-jmeter-2.11/bin/examples/CSVSample.jmx");
        HashTree testPlanTree = SaveService.loadTree(in);
        in.close();

        // Run JMeter Test
        jmeter.configure(testPlanTree);
        jmeter.run();
    }
}
but I keep getting the following messages in the console and my test never executes.
INFO 2014-09-23 12:04:40.492 [jmeter.e] (): Listeners will be started after enabling running version
INFO 2014-09-23 12:04:40.511 [jmeter.e] (): To revert to the earlier behaviour, define jmeterengine.startlistenerslater=false
I have also tried uncommenting jmeterengine.startlistenerslater=false in the jmeter.properties file.
1. How do you know that your "test never executes"?
2. What is in the jmeter.log file (it should be in the root of your project)? Alternatively, comment out the JMeterUtils.initLogging() line to see the full output in STDOUT.
3. Have you changed the relative path CSVSample_user.csv in the "Get user details" CSV Data Set Config? It may resolve to a different location, as recommended in Using CSV DATA SET CONFIG.
4. Is a CSVSample.jtl file generated anywhere (again, it should be in the root of your project by default)? What is in it?
The code looks good, and I'm pretty sure the problem is with the path to the CSVSample_user.csv file, so you likely have something like java.io.FileNotFoundException in your log. Please double-check that the CSVSample.jmx file contains a valid full path to CSVSample_user.csv.
UPDATE TO ANSWER QUESTIONS IN COMMENTS
The jmeter.log file should be under your Eclipse workspace folder by default.
Looking into CSVSample.jmx, there is a View Results in Table listener which is configured to store results under ~/CSVSample.jtl.
If you want to see summariser messages and "classic" .jtl reporting, add the next few lines before the jmeter.configure(testPlanTree); stanza:
Summariser summer = null;
String summariserName = JMeterUtils.getPropDefault("summariser.name", "summary");
if (summariserName.length() > 0) {
    summer = new Summariser(summariserName);
}
String logFile = "/path/to/jtl/results/file.jtl";
ResultCollector logger = new ResultCollector(summer);
logger.setFilename(logFile);
testPlanTree.add(testPlanTree.getArray()[0], logger);
Try using this library: https://github.com/abstracta/jmeter-java-dsl.
It supports implementing a JMeter test as Java code.
The example below shows how to implement and execute a test for a REST API. The same approach can be applied to other types of tests as well.
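// assumes static imports of the JmeterDsl builder methods (testPlan, threadGroup, httpSampler, jtlWriter) and of AssertJ's assertThat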
@Test
public void testPerformance() throws IOException {
    TestPlanStats stats = testPlan(
        threadGroup(2, 10,
            httpSampler("http://my.service")
                .post("{\"name\": \"test\"}", Type.APPLICATION_JSON)
        ),
        // this is just to log details of each request stats
        jtlWriter("test" + Instant.now().toString().replace(":", "-") + ".jtl")
    ).run();
    assertThat(stats.overall().elapsedTimePercentile99()).isLessThan(Duration.ofSeconds(5));
}