How to explicitly order tasks in the UI on Concourse?

I have a bunch of examples that are ordered by number. I'd like to keep the order of the tasks as defined in the Concourse pipeline.yml, but they get reordered by the UI.
Is there any way to explicitly define an order for the tasks in the UI? i.e. ex01, ex02 ... ex07, in that order.

Maybe you have found this out for yourself already?
The order in the user interface (tested with Concourse 6.2.0) depends on the string length of the task name.
Short task names come first - longer task names come below.
If the task names all have the same length, they are ordered alphabetically.
Have a look at the screenshots:
task names filled up with underscores
different task name lengths
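As a rough illustration (plain Python, not Concourse's actual code), the observed behaviour is like sorting by (length, name), which is why padding all the names to the same length (e.g. with underscores, as in the first screenshot) gives a purely alphabetical ex01 ... ex07 order:
# Rough illustration of the observed ordering rule - not Concourse's actual code.
names = ["ex07-longer-name", "ex01", "ex03", "ex02"]
print(sorted(names, key=lambda n: (len(n), n)))
# ['ex01', 'ex02', 'ex03', 'ex07-longer-name']

# Padding every name to the same length makes the order purely alphabetical:
padded = [n.ljust(16, "_") for n in names]
print(sorted(padded, key=lambda n: (len(n), n)))
# ['ex01____________', 'ex02____________', 'ex03____________', 'ex07-longer-name']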

Related

How do I allocate a fixed number of users per user class in Locust

Suppose I have 3 separate user classes. I want to allocate a fixed number of users to each class. My code is as below.
from locust import HttpUser, TaskSet

class User_1(TaskSet):
    # I need 3 users to execute the tasks within this user class
    pass

class User_2(TaskSet):
    # I need only 1 user to execute the tasks within this user class
    pass

class User_3(TaskSet):
    # I need only 1 user to execute the tasks within this user class
    pass

class API_User_Test(HttpUser):
    # I already tried weighting the classes as below.
    tasks = {User_1: 3, User_2: 1, User_3: 1}
I've already tried weighting the classes as shown in the code above, but it doesn't work. Sometimes it allocates more than 1 user to class User_2 or class User_3. Can someone tell me how to fix this issue?
A weight in Locust is just a statistical weight and is not a guarantee. The weights determine how many times a task/user is put into a list to be selected from. When a new task/user is spawned, Locust randomly selects a task from the list. Given your weights:
tasks = {User_1: 3, User_2: 1, User_3: 1}
Statistically speaking, spawning 5 users with weights 3/1/1 would get you 3/1/1 but it may not be that precise every time. While less likely, it's possible you could get 4/0/1 or 3/2/0 or 5/0/0.
From the Locust docs:
If the tasks attribute is specified as a list, each time a task is to be performed, it will be randomly chosen from the tasks attribute. If however, tasks is a dict - with callables as keys and ints as values - the task that is to be executed will be chosen at random but with the int as ratio. So with a task that looks like this:
{my_task: 3, another_task: 1}
my_task would be 3 times more likely to be executed than another_task.
Internally the above dict will actually be expanded into a list (and the tasks attribute is updated) that looks like this:
[my_task, my_task, my_task, another_task]
and then Python’s random.choice() is used to pick tasks from the list.
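You can reproduce that expansion in plain Python (my_task and another_task are just the placeholder callables from the docs):
import random

def my_task(): pass
def another_task(): pass

# {my_task: 3, another_task: 1} is expanded into a weighted list...
weighted = [my_task] * 3 + [another_task] * 1
# ...and each pick is an independent random.choice, so exact ratios are not guaranteed.
picks = [random.choice(weighted) for _ in range(5)]
print([p.__name__ for p in picks])  # e.g. ['my_task', 'my_task', 'another_task', 'my_task', 'my_task']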
If you absolutely have to have full control over exactly what users are running, I'd probably recommend having a single Locust user with a single task that contains your own logic on what to run. Create your own list of functions to call and iterate through it each time a new user is created. Might have to be external to the user as a global or something. But the idea is you manage the logic yourself and not Locust.
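A minimal sketch of that idea, assuming a single worker (the class name, host and endpoints below are made up for illustration, not taken from the question):
import itertools
from locust import HttpUser, task

# Exact 3/1/1 allocation: each newly spawned user takes the next role from this shared pool.
ROLE_POOL = itertools.cycle(["user_1"] * 3 + ["user_2", "user_3"])

class ControlledUser(HttpUser):
    host = "https://example.com"  # placeholder host

    def on_start(self):
        # The pool lives at module level, external to the user instances.
        self.role = next(ROLE_POOL)

    @task
    def run_role(self):
        # Put the logic that would otherwise live in the three separate classes here.
        if self.role == "user_1":
            self.client.get("/endpoint-for-user-1")
        elif self.role == "user_2":
            self.client.get("/endpoint-for-user-2")
        else:
            self.client.get("/endpoint-for-user-3")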
Edit:
Using the single-user method to control what's running won't work well if you run on multiple workers, as the workers don't communicate with each other. You may consider doing some more advanced things like sending messages between the master and workers to coordinate, or using an external source like a database or another service the workers talk to in order to know what they should run.

Parameter Variation: Tracking the Metadata

I am trying to use parameter variation in AnyLogic. My inputs are 3 parameters, each varying 5 times. My output is water demand. What I need from parameter variation is the way in which demand changes according to the different combinations of the three parameters. I imagine something like: there are 10,950 rows (one for each day), the first column is time (in days), the second column holds the values for the first combination, the third column the second combination, and so on and so forth. What would be the best way to track this metadata to then be able to export it to Excel? I have added a "dataset" to my main to track demand through each simulation, but I am not sure what to add to the parameter variation experiment interface to track the output across the different iterations. It would also be helpful to have a way to know which combination of inputs produced a given output (for example, have the combination be the name of each column). I see that there are Java Actions, but I haven't been able to figure out the code to do what I need. I appreciate any help with this matter.
The easiest approach is just to track this in output database tables which are then exported to Excel at the end of your run. As long as these tables include outputs from multiple runs (and are, for example, only cleared at the start of the experiment, not of each run), your Parameter Variation experiment will end up with an Excel file containing outcomes from all the runs. (You will probably need to turn off parallel execution in the PV experiment so you don't run into issues trying to write to the same Excel file in parallel.)
So, for example, you might have tables:
run_details with columns id, parm1, parm2 and parm3 (with proper column names given your actual parameters and some unique ID generated for each run)
output_demand with columns run_id, sim_time_hrs and demand_value (if, say, you're storing some demand value each hour of simulated time) where run_id cross-references the run's ID in run_details
(There is extra complexity in how you could allocate a unique run ID and how and when you write to/clear those tables, but I'm just presenting the core design. You can also get round the need-serial-execution point by programmatically controlling when you export to Excel, rather than using the built-in "Export tables at the end of model execution" capability, but that's also more complicated.)
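As a language-agnostic illustration of that two-table layout (plain Python rather than AnyLogic code; all values and file names below are made up):
import csv

# run_details: one row per simulation run, recording the parameter combination.
run_details = [
    {"id": 1, "parm1": 0.2, "parm2": 5, "parm3": "high"},
    {"id": 2, "parm1": 0.4, "parm2": 5, "parm3": "low"},
]

# output_demand: one row per (run, time) pair, cross-referencing run_details via run_id.
output_demand = [
    {"run_id": 1, "sim_time_hrs": 0, "demand_value": 120.0},
    {"run_id": 1, "sim_time_hrs": 1, "demand_value": 130.5},
    {"run_id": 2, "sim_time_hrs": 0, "demand_value": 95.0},
]

# Exporting both tables lets you join them afterwards (e.g. in Excel) to see which
# parameter combination produced which demand series.
for name, rows in (("run_details.csv", run_details), ("output_demand.csv", output_demand)):
    with open(name, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)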

Azure DevOps classic pipeline difference between linked parameters and variables?

What is the difference between linked task parameters (process parameters) and variables in a classic Azure DevOps build pipeline? Don't they both provide a single place to change values?
What I mean by "linked" task parameters is what you get by clicking the link icon when configuring tasks, like below,
which leads to adding a textbox for the linked value in the settings page for the pipeline, as you see below.
Regarding parameters in the classic pipeline, we generally use process parameters. You can link all important arguments for tasks used across the build definition as process parameters, which are then shown in one place: the Pipeline view. This means you can quickly edit these arguments without needing to click through all the tasks. Templates come with a set of predefined process parameters.
Variables give you a convenient way to get key bits of data into various parts of the pipeline. The most common use of variables is to define a value that you can then use in your pipeline. All variables are stored as strings and are mutable. The value of a variable can change from run to run or job to job of your pipeline.
The difference between them is:
Variables can be a convenient way to collect information from the user up front. You can also use variables to pass data from step to step within a pipeline. Unlike variables, pipeline parameters can't be changed by a pipeline while it's running.
Parameters have data types such as number and string, and they can be restricted to a subset of values. Restricting the parameters is useful when a user-configurable part of the pipeline should take a value only from a constrained list. The setup ensures that the pipeline won't take arbitrary data.
Process parameters also differ from variables in the kind of input they support. Variables only take string inputs, while process parameters additionally support data types such as check boxes and drop-down list boxes.
For detailed information, please refer to the following documents:
Define variables
Process parameters
Variables and parameters

Adding parameters to a VSTS Task Group

I have a Task Group that I created out of a set of build tasks. I am able to edit the tasks quite well, but I now realise I will need to add another parameter to the task group. How do I go about doing that?
Task group parameters are automatically created based on the variables used in the tasks. If you reference a new variable in a task that's within a task group, it will show up as a new task group parameter.
In addition to the accepted answer: if you want to add parameters that are not directly referenced by tasks within the task group (e.g. to use in a config file token replacement task), you can export your task group, edit the .json file, then import it back in. The parameters are in an inputs array near the end of the file. You can also hide parameters here if you only want to use them internally to the task group, by setting a default value and adding a 'visibleRule' property; see this article for details: https://medium.com/objectsharp/how-to-hide-a-task-group-parameter-b95f7c85870c
This will create a new task group rather than updating your current task group. If you want to update the task group, you can use this REST API:
https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/taskgroups/update?view=azure-devops-rest-5.1
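A rough sketch of such an update call using Python's requests (the organization, project, task group id, file name and token are placeholders, and the exact route, api-version and request body should be verified against the linked documentation):
import json
import requests

# Placeholders - replace with your own values.
org, project, task_group_id = "my-org", "my-project", "00000000-0000-0000-0000-000000000000"
url = (f"https://dev.azure.com/{org}/{project}/_apis/distributedtask/"
       f"taskgroups/{task_group_id}?api-version=5.1-preview.1")

# The edited export of the task group, with the extra entries added to its "inputs" array.
with open("exported-task-group.json") as f:
    task_group_definition = json.load(f)

# Azure DevOps accepts a personal access token as the password in basic auth.
response = requests.put(url, json=task_group_definition, auth=("", "<personal-access-token>"))
response.raise_for_status()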

In which order are workflow items processed?

I have a number of workflow items on cases in SuiteCRM.
How can I determine the order in which these items are processed? In my situation, I am setting the priority of the case based on the values of some integer fields. However, these integer fields must first be populated based on the values of some dropdowns.
How can I make sure they are populated in the correct order? I can't see an order of execution with the workflow items.
Workflow simply pulls the workflow items to run using get_full_list, which will just give the items in whatever order the database returns them (probably by id).
The alternative is to add a new hidden flag field to the case to signify that the values have been set, then check this in the workflow conditions.
Allowing a priority to be set for a workflow would be a good addition, however, and I've raised this on the SuiteCRM GitHub: https://github.com/salesagility/SuiteCRM/issues/280