I am batching agents using a Batch block.
My agents have a parameter value. When I batch them, I want to sum the parameter values and assign the result to the newly created batch agent.
agent1 = 50, agent2 = 70, agent3 = 30.
If I batch these three agents, the new aggregated agent should have a value of 150 (the sum of all the agents' values).
In the "On add" action of your Batch block you do this:
batch.parameter += agent.parameter;
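A slightly fuller sketch of the same idea, assuming the parameter is named value on both the incoming agent type and the batch agent type (hypothetical names), and that it defaults to 0 on the batch:

// "On add" action of the Batch block; runs once per agent that joins the batch
batch.value += agent.value;
// after agent1 (50), agent2 (70) and agent3 (30) are added, batch.value is 150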
I want to use an Apex Batch class to insert 10,000 records into an object called A, and then use an after insert trigger to update the Weight field of all 10,000 records to 100 if the largest Weight value among them is 100.
But right now, if the batch size is 500, the largest Weight value out of the first chunk of 500 records is applied only to those 500 records, and the largest Weight value out of the next chunk of 500 is applied only to that chunk, and so on.
For example, if the largest Weight among the first 500 records is 50:
Weight field value for records 1-500: 50
And if the largest Weight among the next 500 records is 100:
Weight field value for records 501-1000: 100
What I want is: given 10,000 records, find the largest Weight among all 10,000 of them and update the Weight field of every record to that value.
How shall I do it?
Here's the code for the trigger I wrote:
trigger myObjectTrigger on myObject_status__c (after insert) {
    // Re-query the just-inserted records to get an editable copy
    List<myObject_status__c> objectStatusList = [SELECT Id, Weight FROM myObject_status__c WHERE Id IN :Trigger.newMap.keySet()];
    // Highest Weight currently stored anywhere on the object
    Decimal maxWeight = [SELECT Weight FROM myObject_status__c ORDER BY Weight DESC LIMIT 1].Weight;
    for (Integer i = 0; i < objectStatusList.size(); i++) {
        objectStatusList[i].Weight = maxWeight;
    }
    update objectStatusList;
}
A trigger will not know whether the batch is still going on. Triggers work on a scope of at most 200 records at a time and normally see only that. There are ways around it (create some static variable?), but even then it'd be limited to whatever the batch's size is, i.e. whatever came into a single execute(). So if you're running in chunks of 500, not even a static in a trigger would help you.
A couple of ideas:
How exactly do you know it'll be 10K? Are you inserting them based on another record? Are you using the Iterator variant of batch? Could you "prescan" the records you're about to insert, figure out the max Weight, then apply it as you insert, eliminating the need for an update?
If it's never going to be bigger than 10K (and there are no side effects, no DMLs running on update), you could combine Database.Stateful and the finish() method. Keep updating the max value as you go through the execute() calls, then in finish() update the records one last time (see the sketch below). Cutting it real close on the limits, though.
Can you "daisy chain"? Submit another batch from this batch's finish(), passing the same records and the max you figured out.
Can you stamp the records inserted in the same batch run with the same value, for example by putting the batch job's Id into a hidden field? Then have another batch (daisy-chained?) that looks for them, finds the max in the given range, and applies it to any records that share the batch job Id but don't have the value applied yet.
Set the weight in the finish() method of your batch class; it runs once all batches have finished. Track the max weight in an instance variable on a Database.Stateful batch class (instance state is carried between execute() calls; a static would not survive across the batch's separate transactions).
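A minimal sketch of that Database.Stateful + finish() idea. The class name is made up, the object and field names follow the trigger above, and the start() query is an assumption about how the records are selected:

global class MaxWeightBatch implements Database.Batchable<sObject>, Database.Stateful {
    // Carried across execute() calls because the class is Database.Stateful
    global Decimal maxWeight = 0;

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Weight FROM myObject_status__c');
    }

    global void execute(Database.BatchableContext bc, List<myObject_status__c> scope) {
        // Track the largest Weight seen so far, across every chunk
        for (myObject_status__c rec : scope) {
            if (rec.Weight != null && rec.Weight > maxWeight) {
                maxWeight = rec.Weight;
            }
        }
    }

    global void finish(Database.BatchableContext bc) {
        // Runs once, after the last chunk: stamp the global max on all records.
        // 10,000 rows is exactly the per-transaction DML row limit, hence "cutting it close".
        List<myObject_status__c> toUpdate = [SELECT Id FROM myObject_status__c];
        for (myObject_status__c rec : toUpdate) {
            rec.Weight = maxWeight;
        }
        update toUpdate;
    }
}

If the volume ever grows past those limits, the daisy-chain variant would instead submit a second batch from finish(), e.g. Database.executeBatch(new ApplyWeightBatch(maxWeight)), where ApplyWeightBatch is a hypothetical second class that only writes the value.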
I have an agent population "MyAgents" and I am trying to update the agents' parameters during the process flow. The picture shows that I am updating the parameter with a value of 1.
Then, in the simple function below, I am trying to sum it up, but it is not giving any value.
I have traced the parameter value in the Sink block and it can be viewed there, but when I use the function to sum it up, it gives zero output.
My goal is to update parameters on the entry or exit of a block.
Is there a way to manage this?
The problem is that you're updating the parameter once the agent enters the sink. After an agent enters the sink, it is removed from the population, so the population myAgents will no longer contain the agent with the updated parameter value, and thus the sum will always be zero.
Instead, I would suggest creating a global variable total_amount (e.g. in Main) and changing the "On enter" code of the sink to the following (if total_amount should be incremented by 1):
total_amount++;
Or, if total_amount should be incremented by a given value:
total_amount += <parameter_value>;
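For example, assuming each agent carries a parameter named amount (a hypothetical name), the sink's "On enter" action would read:

total_amount += agent.amount; // accumulate before the sink removes the agent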
I need to split a huge dataset into multiple files, and each file must not have more than 100,000 rows.
I don't know if this is possible with a Data Flow and the Conditional Split?
If you simply want to split by a fixed number of rows, here is a simple test I created.
Declare a parameter inside the data flow to store the row count of your source dataset. If your source dataset is Azure SQL, you can use a Lookup activity to get the max Row_No. If your source dataset is Azure Storage, you can use an Azure Function activity to get the max Row_No. Then pass the value to the parameter.
Here, for the test, I set a static default value.
Then we can set the "Number of partitions" expression to $RowCount/10 if you want 10 lines per file.
We can set the file names after the division here.
My source dataset contains 50 lines, so ADF will split it into 5 files. Judging by the Id column, each file has taken 10 rows of data at random.
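For the original question's cap of 100,000 rows per file, the partition count has to be rounded up so that no file exceeds the limit. A sketch in the Data Flow expression language, assuming the $RowCount parameter from above (ceil() is available in mapping data flows):

ceil($RowCount / 100000)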
You can achieve this with 2 data flows: one to get the row count and another to partition. In the future, this should also be achievable in 1 data flow using a cache sink.
My root job has two steps:
a Transformation Executor (to copy rows to result) and a Job Executor (executing for each input row).
What I want is for my sub-job to execute completely for the first incoming row before it starts executing for the second row.
Click on the Job Executor step and check the box "Execute for every input row".
Tell me if it is not what you need.
Unless you specify a value other than 1 for "Change Number Of Copies To Start" (right-click on any transformation entry to see that option), that will always be the expected behavior.
If the number is greater than 1, the Job Executor will have that number of copies running in parallel, distributing the input rows among them (for example, with 100 input rows and 10 copies, each copy will execute 10 rows, no matter what).
new_Screenshot
Question, revised:
In my model, I have 10,000 "Persons" agents at the "Main" level. As shown in new_Screenshot, there is a process like the statechart. "variable1" is determined by the process; for example, Person 1 will have 10 as the value of "variable1" while Person 2 will have 100. My question is how to obtain the values (e.g. Person 1: 10, Person 2: 100, ..., Person 10000: 10) in AnyLogic.
Thank you.
Previous version: My model has 10,000 "Persons" agents. "Persons" have a statechart, and a variable ("variable1" in the Screenshot) takes a set of different values from the statechart. I am trying to collect all those values of the variable for all 10,000 "Persons". How can I do this? I have tried to use traceln, but it didn't work, because I need the individual values and not the min, max, average, etc.
Thank you!
Screenshot
So the answer is the following:
If your agent is defined as an agent type, then you can't create a population of 10,000... to create a population of 10,000 you need to create an agent population, so I assume that's what you did, even though you say the opposite.
An element of a population of agents can be accessed the same way as any collection using:
persons.get(N);
where N is any integer between 0 and 9999.
If you want to access a variable in that particular agent:
persons.get(N).variable1
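A minimal sketch for collecting every agent's value in one place, assuming the population is named persons, the agent type is Person, and variable1 is a double:

// e.g. in a function on Main, or in a button's action
List<Double> values = new ArrayList<Double>();
for (Person p : persons) {    // the population iterates like any collection
    values.add(p.variable1);  // one entry per Person
}
traceln("All values: " + values); // prints all 10,000 values to the console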