The right way to assign tasks - Toloka

I am a bit worried about tasks being mixed after the upload. Can I assign tasks to performers in the order they were uploaded, without mixing them?
Also, is it possible to use the API to edit tasks?

To make sure that task pages are assigned to performers in the order they were uploaded, set the key "mix_tasks_in_creation_order" to true when configuring your pool. Note that in this case the order of tasks within a page can still differ. To prevent tasks from being shuffled within pages, set the key "shuffle_tasks_in_task_suite" to false.
As for editing: you can only use the API to create a task, stop assigning it, or change its overlap.
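For illustration, here is a minimal Python sketch of those API operations. The endpoint paths, auth header, and payload fields below are assumptions based on the Toloka REST API, so verify them against the current reference; the token and ids are placeholders.
import requests

# Illustrative values; use your own OAuth token and pool id.
TOKEN = "your-oauth-token"
BASE = "https://toloka.yandex.com/api/v1"
headers = {"Authorization": f"OAuth {TOKEN}"}

# Create a task in a pool; with mix_tasks_in_creation_order set to true,
# pages are issued in the order the tasks were created.
task = {
    "pool_id": "12345",
    "input_values": {"image": "https://example.com/image.png"},
    "overlap": 3,
}
resp = requests.post(f"{BASE}/tasks", headers=headers, json=task)
resp.raise_for_status()
task_id = resp.json()["id"]

# Change the overlap of the existing task later on.
requests.patch(f"{BASE}/tasks/{task_id}", headers=headers,
               json={"overlap": 5}).raise_for_status()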

Related

Accessing agents that are grouped using the Batch block

Using the Batch block, I group together a certain number of agents, and they get a new agent identity.
I defined a temporary batch (permanent batch = false).
How can I access the agents that are 'inside' the new agent?
I tried to find which population holds the original agents, without success.
You can access them through agent.contents().
Beware, though, that you need to use the Unbatch block if you want to use those agents for anything at all. I would say it's good practice to treat these batched agents as read-only; otherwise unpredictable things might happen.

How to provide dynamic values for approvals and checks in YAML pipelines?

I'm working on an integration between Azure Pipelines and ServiceNow's change management module. To achieve that, the ServiceNow Change Management extension has been installed and configured according to this documentation page. In Azure DevOps we are using multi-stage YAML pipelines, which should create standard pre-approved changes in ServiceNow.
The connection itself between the two applications works fine: I managed to put together a pipeline that creates change requests, waits until their status changes, and then closes them. However, I'd like to pass some values set in the pipeline runs to the created change requests, and I couldn't find a way to do it.
First I added a service connection to our Azure DevOps project and created the ServiceNow check for it. I experimented a little with adding different expressions to it, like setting the short description to ${{ parameters.shortDescription }}, or defining a variable in the pipeline as ShortDescription: ${{ parameters.shortDescription }} and using that variable in the check as $(ShortDescription) or $[ variables.ShortDescription ]. Unfortunately, none of these expressions got resolved. I also realized it is possible to use the predefined variables, but the values I'd like to set cannot be expressed with predefined variables. For example, selecting an assignment group would be pretty straightforward from a parameter defined as a list, but impossible to select from predefined variables.
So as a next idea, I tried to link a variable group to the check and update the variables through logging commands. Even though the variables from the group got resolved, they only showed the values I had set through the UI as static defaults; the dynamic values set via the logging commands were not visible. I played around for some time and verified that I can update the definition of the variable group through the Azure CLI or the REST API, so I can add new variables or update existing ones. Thus I tried to add a new variable to the linked group during the pipeline run, named ShortDescription_$(Build.BuildId). Even though it got added properly, I could not use it within the check, because that would require double variable resolution, like $(ShortDescription_$(Build.BuildId)), and this expression was not resolved, not even partly. It remained $(ShortDescription_$(Build.BuildId)).
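For reference, the variable group update mentioned above can be done against the Azure DevOps REST API. A minimal sketch, assuming the distributedtask/variablegroups endpoint; the api-version and all names below are illustrative, so check them against the current REST reference:
import requests

# Illustrative values; replace with your organization, project, group id, and PAT.
ORG_URL = "https://dev.azure.com/my-org"
PROJECT = "my-project"
GROUP_ID = 42
PAT = "personal-access-token"

url = (f"{ORG_URL}/{PROJECT}/_apis/distributedtask/variablegroups/"
       f"{GROUP_ID}?api-version=5.1-preview.1")

session = requests.Session()
session.auth = ("", PAT)  # a PAT goes in the password field of basic auth

# Fetch the current definition, add or update a variable, and PUT it back.
group = session.get(url).json()
group["variables"]["ShortDescription"] = {"value": "Deploy build 1234"}
session.put(url, json=group).raise_for_status()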
Then I started thinking about using only one variable from the group with a static name (e.g. ShortDescription) for all pipeline runs. However, I feel it would create a race condition and could cause some inconsistencies.
So as a last resort, I tried to put together an extension with an Agent task and a ServerGate task, which are capable of storing the values I want to pass to the change request and reading the stored values in an agentless environment. The problem here is that the second task is not visible as a check for service connections. It's there as a release pipeline gate and looks good there, but I can't utilize it that way. Based on a question I found, this does not seem to be a problem with my task specifically. To verify that, I copied the content of the same ServiceNow check I used before and added it to my extension as a contribution with a different task id, and it did not show up, just as the question stated.
Which means now I can either:
create a change request through my custom server task (as the ServerGate task can be used properly in YAML if it is changed to a Server task), but that way I can't wait for the state change of the ServiceNow ticket, or
create the change request in a separate stage from the one where I want to use it, update it via the first-party check in the same stage where I created it, and wait for the state change in the stage where I would normally create it.
The second can work, but it has its own problems, like having misleading values stored in the change request for the stage id field, or not having multiple change requests created for multiple run attempts of the deployment stage. Also, I feel like that's not how the extension's task and check are meant to be used.
Unfortunately, I'm out of ideas as to how this dynamic value passing can be achieved, if it's possible at all. Could you please help by sharing ideas, or by pointing out errors in my attempts?

What are the default attributes of an agent in AnyLogic? What is the proper way to copy an agent?

I am trying to figure out what the main default attributes of agents in AnyLogic are, e.g., the id, the position, and the index. So far I haven't found them in the help or on Stack Overflow.
1) Do you know where this can be found, or can you summarize the main ones you know? For instance, the id used as a unique identifier, or the index used as the position inside the population.
2) Are there any attributes regarding the agent's history? For example, time stamps such as the creation time, or the blocks it has passed through?
3) Is it possible to change the default id attribute of an agent? Can two agents have the same id?
4) As the Split block doesn't copy any of the parameter or variable values to the copy, what is the proper way to copy an agent? I noticed in another post Benjamin mentioned using agent.set_MyParam(original.MyParam). What would "MyParam" be in this code? Would this copy the values of parameters, variables, and the current state in the statechart? Is it possible to make a copy and initialize its current statechart state to the original agent's current state?
Thank you for your help.
There are many things that are generated when you create an agent; the best you can do is check the API for the agent here: agent api
You can only see history-related things (such as the blocks an agent has passed through) in the log if you activate it, not through the API.
The id is a unique identifier, and I've never been in a situation where I needed to change it; but if you want to change it, you can use the setId method, in which case two or more agents may end up with the same id.
You can only use set_MyParam if MyParam is a parameter; you can't do the same with variables. Nevertheless, if you want to copy an agent, you need to do it variable by variable, state by state, everything from scratch. There's no magical way to copy the exact same agent with all its current values, states, connections, etc.

Is it possible to 'update' an ADF pipeline property through PowerShell?

I would like to update some of the properties of an ADF pipeline (e.g., the concurrency level) across lots of mappings. I cannot find any cmdlet that would let me do this through PowerShell. I know I can drop the existing pipeline and create a new one, but that would restart processing of all the Ready slices for the pipeline's active period, which I don't want, because it would involve working out up to what point the existing pipeline has processed slices. And this change is only temporary; at some stage I am going to revert the settings. I just want the pipeline to change one of its properties. Doing this manually through the UI is slow and tedious. I am guessing there is no way around this, but let me know if you know of one.
You can still use "New-AzureRmDataFactoryPipeline" for this update scenario:
https://msdn.microsoft.com/en-us/library/mt619358.aspx
Use it with the -Force parameter to make it proceed even though the message reads "... may overwrite the existing resource".
Under the hood, it's the same HTTP PUT API call used by the Azure portal; you can verify that with Fiddler.
The already executed slices won't be re-run unless you set their status back to PendingExecution.
This rule applies to LinkedServices and Datasets as well, but NOT to the top-level DataFactory resource. New-AzureRmDataFactory will cause the service to delete the existing data factory along with all its sub-resources and create a brand new one, so be careful there.
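If you'd rather script that PUT directly, here is a minimal Python sketch against the ADF (v1) REST API. The resource path, api-version, and the per-activity location of the concurrency setting are assumptions to verify against the REST reference; all names are placeholders:
import requests

# Illustrative values; replace with your own subscription, names, and token.
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-rg"
FACTORY = "my-data-factory"
PIPELINE = "my-pipeline"
TOKEN = "azure-ad-bearer-token"

url = ("https://management.azure.com"
       f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
       f"/providers/Microsoft.DataFactory/datafactories/{FACTORY}"
       f"/datapipelines/{PIPELINE}?api-version=2015-10-01")
headers = {"Authorization": f"Bearer {TOKEN}"}

# Fetch the current definition, tweak one property, and PUT it back;
# existing slices keep their status, mirroring the cmdlet's behavior.
pipeline = requests.get(url, headers=headers).json()
for activity in pipeline["properties"]["activities"]:
    activity.setdefault("policy", {})["concurrency"] = 4
requests.put(url, headers=headers, json=pipeline).raise_for_status()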

How to make sequential HTTP GET calls from Locust

In a Locust load test environment, tasks are defined and called randomly.
But if I want a task to be performed just after a specific task, how do I do it?
For example: after every 'X' URL call, I want the 'Y' URL to be called based on the response of 'X'.
In my experience, I found that it's better to model Locust tasks as completely independent of each other, with each of them covering a user scenario or behavior (e.g. a customer logs in, searches for a book and adds it to the cart). This is mostly because that's a closer simulation of real user behavior.
Have you tried just having the multiple requests in the same task, branching with if / else based on your responses? This slide from Carl Byström's talk follows that approach.
You just have to make the GETs or POSTs sequential. When you define your task, do something like this:
@task(10)
def my_task(l):
    # requests inside a single task always run in order
    response = l.client.get('/X')
    if response.status_code == 200:
        # call '/Y' based on the response of '/X'
        l.client.get('/Y')
Alternatively, you can create a custom task set that inherits from the TaskSequence class,
and add seq_task decorators to all of the task set's methods to run its tasks sequentially:
https://docs.locust.io/en/latest/writing-a-locustfile.html#tasksequence-class
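A minimal sketch of that approach, based on the linked documentation (the paths, wait times, and class names are illustrative):
from locust import HttpLocust, TaskSequence, seq_task

class SequentialBehaviour(TaskSequence):
    # seq_task(n) fixes the execution order within the task set
    @seq_task(1)
    def get_x(self):
        self.last_response = self.client.get('/X')

    @seq_task(2)
    def get_y(self):
        # act on the response of '/X' before calling '/Y'
        if self.last_response.status_code == 200:
            self.client.get('/Y')

class WebsiteUser(HttpLocust):
    task_set = SequentialBehaviour
    min_wait = 1000
    max_wait = 2000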