Although I am able to run a story n times using JBehave, I want to execute a single step n times. Is that possible?
It should be possible. Try configuring your runner class to create n instances of your steps and run the n associated story files.
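A minimal sketch of such a runner using JBehave's Embedder API (MySteps and the story file names are placeholders, and this is only an illustration, not the one definitive setup):

    import java.util.ArrayList;
    import java.util.List;

    import org.jbehave.core.annotations.When;
    import org.jbehave.core.configuration.MostUsefulConfiguration;
    import org.jbehave.core.embedder.Embedder;
    import org.jbehave.core.steps.InstanceStepsFactory;

    public class RepeatedStoryRunner {

        // placeholder steps class containing the step to repeat
        public static class MySteps {
            @When("the step to repeat runs")
            public void theStep() {
                System.out.println("step executed");
            }
        }

        public static void main(String[] args) {
            int n = 5; // number of repetitions
            Embedder embedder = new Embedder();
            // register the steps instance (create n of them if your steps hold state)
            embedder.useCandidateSteps(
                    new InstanceStepsFactory(new MostUsefulConfiguration(), new MySteps())
                            .createCandidateSteps());
            // queue the n story files, each exercising the step once
            List<String> storyPaths = new ArrayList<String>();
            for (int i = 0; i < n; i++) {
                storyPaths.add("stories/story_" + i + ".story");
            }
            embedder.runStoriesAsPaths(storyPaths);
        }
    }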
When I create a Parameters Variation experiment, the main model does not run. All I see is the default UI with iterations completed and replications. My end goal (as with most people) is to have a model go through a certain number of replications, but nothing is even running. There is limited documentation available on this. Please advise.
This is how Parameters Variation is intended to work. If you're running 1000 runs and multiple replications with parallel runs, how can you see what's happening in Main in each?
Typically, the best way to benefit from such an experiment is to track the results of each run using elements from the Analysis palette or, even better, to export the results to Excel or similar.
To be able to collect data, you need to write your code in the Java action fields, using root. to access elements in Main (or whatever the top-level agent is).
Check the example below, where after each run a variable from Main is added to a dataset in the Parameters Variation experiment. At the end of 100 runs, for example, the dataset will have 100 values of the Main variable, one value for each run.
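A rough sketch of what that "After simulation run" action could look like, where myDataset is a Data Set element on the experiment and outputValue is a variable on Main (both placeholder names):

    // run index on the x-axis, the Main variable on the y-axis
    myDataset.add( getCurrentIteration(), root.outputValue );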
I need to create a method (or similar) which reads the runs of two agents and combines them into one run.
Each agent has a database which contains multiple parameters, and each parameter has multiple values: one value per parameter per run.
So how can I let the project run all these different alternatives and get me the different outputs for the combinations?
Create a parameter variation experiment and add one boolean parameter on Main, "runFirstAgent". Set up your model to only load the first agent if runFirstAgent == true; otherwise, let it run the second agent.
In the parameter variation experiment, set up the runs so that runFirstAgent varies accordingly.
Then, you can accumulate results from your runs in the experiment itself.
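A minimal sketch of what Main's startup logic could look like, assuming two agent populations named firstAgents and secondAgents (placeholder names, relying on the add_ functions AnyLogic generates for populations):

    if ( runFirstAgent ) {
        add_firstAgents();   // populate only the first agent type for this run
    } else {
        add_secondAgents();  // otherwise run the second agent type
    }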
There are lots of example models that show you how to do it; check those out first ;-)
Good afternoon to everyone,
I have a problem with the software AnyLogic. I have to build a model that is able to combine two agents (two different semi-finished products) to create one final agent (the final product). The problem is that the two semi-finished products have different production times, so I need a function that is able to accept one agent (the first semi-finished product), then wait for the second agent, and finally generate one final agent from the previous two. How is it possible to do this? I have already tried with the "Combine" function, without any success.
You need to use the "Assembler" object from the Process Modeling Library. Specify how many agents of type A and B you need (1 of each in your case). The Assembler will create a new agent type (which you need to specify) from the 2 incoming agents once it has got 1 of each.
Also check the help on the Assembler; you can do lots of fine-tuning with it.
cheers
I need to execute a sequence of steps a specific number of times. Any pointers on the best way to do this in Spring Batch? I am able to implement executing a single step 'x' times, but my requirement is to execute a set of steps, based on a condition, 'x' times. Any pointers will help.
Thanks
Lakshmi
You could put all the steps in a job and start the whole job several times. There are different ways a job can actually be launched in Spring Batch; have a look at JobOperator and JobLauncher, and then simply implement a loop around the launching of the job.
You can do this after the whole Spring context is initialized, so there will be no overhead there. But you must pay attention to the scope of your beans, especially the readers and writers.
Depending on your needs concerning failure handling and restart, you also have to pay attention to how you manage the execution context of your job and steps.
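A minimal sketch of such a loop, assuming job and jobLauncher come out of your Spring context (the "iteration" parameter is only there to give every launch its own JobInstance):

    import org.springframework.batch.core.Job;
    import org.springframework.batch.core.JobExecution;
    import org.springframework.batch.core.JobParameters;
    import org.springframework.batch.core.JobParametersBuilder;
    import org.springframework.batch.core.launch.JobLauncher;

    public class RepeatedJobRunner {

        private final JobLauncher jobLauncher;
        private final Job job;

        public RepeatedJobRunner(JobLauncher jobLauncher, Job job) {
            this.jobLauncher = jobLauncher;
            this.job = job;
        }

        public void runNTimes(int n) throws Exception {
            for (int i = 0; i < n; i++) {
                // a distinct parameter per launch so each run gets its own JobInstance
                JobParameters params = new JobParametersBuilder()
                        .addLong("iteration", (long) i)
                        .toJobParameters();
                JobExecution execution = jobLauncher.run(job, params);
                // check execution.getStatus() here if a failed run should stop the loop
            }
        }
    }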
You can simulate a loop in Spring Batch using a JobExecutionDecider:
Put it in front of all the steps.
Store x in the job execution context and check its value in the decider: move to 'END' if x equals the desired value; otherwise increment it and move to the first step of the set.
After the last step, move back to the start (the decider).
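A rough sketch of such a decider (the context key "loop.count" and the limit of 5 are placeholders):

    import org.springframework.batch.core.JobExecution;
    import org.springframework.batch.core.StepExecution;
    import org.springframework.batch.core.job.flow.FlowExecutionStatus;
    import org.springframework.batch.core.job.flow.JobExecutionDecider;

    public class LoopDecider implements JobExecutionDecider {

        private static final int LIMIT = 5; // desired number of passes

        @Override
        public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
            int count = jobExecution.getExecutionContext().getInt("loop.count", 0);
            if (count >= LIMIT) {
                return new FlowExecutionStatus("END");
            }
            // not done yet: bump the counter and go around again
            jobExecution.getExecutionContext().putInt("loop.count", count + 1);
            return new FlowExecutionStatus("CONTINUE");
        }
    }

In the job definition, route the decider's "CONTINUE" outcome to the first step of the set, route "END" to the job's end, and have the last step transition back to the decider.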
Recently I got access to run my code on a cluster. My code is totally parallelizable, but I don't know how best to use its parallel nature. I have to compute the elements of a big matrix, and each of them is independent of the others. I want to submit the job to run on several machines (like 100) to speed up the computation of the matrix.
Right now, I have written a script to submit multiple jobs, each responsible for computing a part of the matrix and saving it in a .mat file. At the end I merge them to get the whole matrix. For submitting each individual job, I have created a new .m file (run1.m, run2.m, ...) to set a variable and then run the function that computes the associated part of the matrix. So basically run1.m is
id=1;compute_dists_matrix
and then compute_dists_matrix uses id to find the part it is going to compute. Then I wrote a script to create run1.m through run60.m and qsub them to the cluster.
I wonder if there is a better way to do this using some MATLAB features, for example, because this seems to be a very typical task.
Yes, it works, but it is not ideal, and as you say this is a common problem. MATLAB has the Parallel Computing Toolbox.
Does your cluster have it? If so, distributed arrays are worth having a look at. If you don't have access to it, then what you are doing is the only other way. You can wrap your run1.m, run2.m, ... in a controlling script to automate it for you...
I believe you could use command-line arguments for the id and submit jobs with a range of values for it. Command-line arguments can be processed by launching MATLAB from the command line without the IDE, providing the name of the script to be executed and the list of arguments. I would think you can set up dependencies in your job manager and create a "reduce" script to merge the partial results (from files). The whole process could be managed from a single script that generates the id and the other necessary arguments and submits the processing and post-processing jobs with dependencies.