Dynamically Changing Distribution in AnyLogic

I am using AnyLogic to develop a model.
I used the 'distribution' element to initialize values for a parameter in my model. It is working fine, but I want to update these values as my simulation proceeds. For example, in week 1 the distribution might have these values:
[screenshot: values of the 'distribution' element]
But in week 2 I want to update these values, and then again in each following week.
I have an equation based on which I want to calculate the updated values.
I could not find any functionality in AnyLogic for this.
Any ideas on how to achieve this?

You can create a distribution from scratch using one of the various constructors. Pass an array containing the existing values plus the additional ones into the constructor to get the updated custom distribution. Your distribution is created with this constructor:
CustomDistribution(double[] intervalStarts, int[] numberOfObservations, Agent owner)
It may be convenient to store the initial array in the database and each subsequent array in a model variable.
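For illustration, here is a minimal sketch of rebuilding the distribution from a weekly cyclic event. Everything except the constructor signature above is an assumption: the variable and function names are hypothetical, and you should check how your AnyLogic version pairs the observation counts with the intervals defined by intervalStarts.

    // Model variables (hypothetical names) holding the current data,
    // e.g. loaded from the database on startup:
    //   double[] intervalStarts = {0, 10, 20, 30}; // interval boundaries
    //   int[] observations = {5, 12, 8};           // counts, one per interval (assumed pairing)
    //   CustomDistribution myDist;

    // Function body, called from a cyclic event that fires once per model week:
    for (int i = 0; i < observations.length; i++) {
        // recalcObservations() stands in for your own equation (hypothetical helper)
        observations[i] = recalcObservations(i);
    }
    // Rebuild the distribution with the updated arrays, using the
    // constructor quoted above; 'this' is the owning agent.
    myDist = new CustomDistribution(intervalStarts, observations, this);

Storing the rebuilt distribution in the model variable myDist lets the rest of the model keep sampling from it while the weekly event swaps in the new data.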

Related

Problems with having 2 dim dates in SSAS/Power BI?

I have a basic model with a fact table and two dimensions (one of them is a Date dimension).
Now a new column with a date has been added to the fact table, so I have created a second 'Dim Date' and connected the new column to it.
I have the following doubts:
Can having two dim dates cause any problems in my .pbix or cube?
Should I also 'Mark as date table' for this new 'Dim Date'? Can I have two tables marked as date tables?
This new 'Dim Date' will be used only as a filter in the .pbix; I don't plan on using any time intelligence on it.
It depends:
The Analysis Services tabular engine that Power BI runs on supports multiple relationships between two tables, with only one active at a time. I would generally recommend using this feature together with the USERELATIONSHIP() function, so that your measures supply the right date context in the report.
However, I have found situations where using USERELATIONSHIP() in many measures introduces unnecessary complexity into your model. You can end up with far too many measures, and it gets confusing when two measures used in the same visual rely on two different relationships.
In short: there is nothing inherently wrong with duplicating a dimension, but for the sake of storage and model cleanliness, make sure that USERELATIONSHIP() with multiple relationships between the fact and the dimension will NOT work for you before duplicating the dimension.

Modifying AFL to include a new variable for the Fuzzer to consider in seed selection

I am looking to understand how AFL implements its seed selection. To my understanding, afl-fuzz.c has a function called has_new_bits whose return value indicates whether an input produces a new path, a new edge, or nothing interesting. My question is this: I am able to insert a few lines of code that add a variable such as a counter, plus other lines that increment it in a given branch. How do I modify AFL so that it can detect this counter?
In AFL++, you can affect the coverage bitmap directly using __afl_coverage_interesting. You can, for instance, compute the val parameter from the value of your counter (but keep in mind that val is a u8, so it must fit in 0-255).
Another way is to use FuzzFactory, a modified version of AFL that allows the user to define custom coverage metrics. In their paper, the authors discuss one of the possible coverage metrics FuzzFactory can use: validity, with which the fuzzer selects valid inputs with higher probability. You can hack around with it and build a FuzzFactory version that focuses on inputs triggering unsafe code instead of valid inputs.

How can I keep a record of (or see) what value was given by the distribution each time the process took place?

I have been building a model in AnyLogic over the last few weeks, and I am currently simulating the time a truck takes to deliver its products, so I used a delay for this, in which different parameters are multiplied by several distributions. Is there any way I can keep track of the value drawn from the distribution each time the process takes place? The following is an example:
normal(2, 8, 4.67, 1.96)*DropSize
DropSize is my parameter, but I want to know what value was generated by the normal distribution, and keep track of it.
Sure, several ways (as usual with AnyLogic :-) ). Here is one:
create a collection of type ArrayList (with Double as the element class);
then create a function that draws the random value, stores it in the collection, and returns it, as below;
last, replace the code creating the random value with a call to that function. Now, whenever you pull a value from the distribution, it is also stored in the collection.
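Here is a minimal sketch of such a function, assuming a collection named mySamples of type ArrayList<Double> in the same agent (the collection and function names are hypothetical; the distribution call is the one from the question):

    // Body of an AnyLogic function element, e.g. named "trackedNormal",
    // with return type double:
    double value = normal(2, 8, 4.67, 1.96); // same draw as in the delay expression
    mySamples.add(value);                    // record the drawn value for later inspection
    return value;

The delay time then becomes trackedNormal() * DropSize, and mySamples holds every value that was drawn.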
cheers

Creating Dynamic Filters in Tableau

I'm working in Tableau to help my school district visualize discipline data. I want to be able to disaggregate and filter by quite a few different measures (at least 13).
In the past, if I wanted to be able to disaggregate by a number of measures, I would make a parameter with a list of possible outputs, display each output as the name of a measure, then create a calculated field that returned the value from a given measure based on that parameter. This works fine for disaggregating.
However, filtering based on these values presents a challenge. The problem is that I'm not filtering based on any given measure, I'm filtering on a calculated field that returns the value in that measure. If my parameter is set to "Day" for instance, and I filter to Tuesday, but then switch to "Race", everything vanishes, because now my calculated field is returning race. What I want to create is a dropdown menu that lets you select from a number of different measures to filter by.
Below is a link to a packaged workbook that can help illustrate the problem that I'm dealing with.
I feel like something like this should be possible in Tableau, but there's some little trick that I'm missing. When I contacted their support team, both of the solutions they offered were only viable because of the limited number of measures in my dummy data. The support team felt that this should be possible as well, but they didn't know how.
https://public.tableau.com/profile/publish/DynamicFiltersUsingParameters/Sheet1#!/publish-confirm
You could create a Filter Action on the Tableau dashboard which carries over the 'Day' filter, giving a smaller subset of data to work with for the next filter.

Handling arrays in SOAP web service testing using FitNesse

Is there a way to dynamically create tables in the wiki?
Use case: I'm trying to mimic SOAP Sonar's behaviour in FitNesse.
SOAP Sonar:
1. Once we import the WSDL, SOAP Sonar generates inputs for the operations in the WSDL.
2. Choose an operation, enter the input, and then execute the operation.
3. In the case of arrays, we can select the size of the array and enter values in the respective array elements.
FitNesse:
1. I'm able to achieve point 1 using the SoapUI jars.
2. I'm able to achieve point 2 using the XmlHttpTest fixture.
I'm stuck on point 3. Is there a way I can do this in FitNesse? (My idea is that from point 1 I can get a sample input for each operation, from which I will know whether there are arrays/complex types present in the input XML, but how do we represent this in the wiki dynamically?)
Thanks in advance
What I've done in the past is use ListFixture (and MapFixture) to dynamically fill a list (and maps/hashes for each element's properties) and then use these as input values to XmlHttpTest's feature to create the body to be sent using a FreeMarker template (which allows iteration over a list, which I use to create the elements in the array based on the list).
But this gets complex quickly. Is that level of flexibility truly required? I have found that hard-coding the number of elements in arrays/lists in the wiki is quite often simpler to do and makes the test far easier to understand and maintain.
In most cases I prefer to create a script (or scenario) with the right number of elements for the test case(s), with the request, in the wiki page. The use of scenarios allows me to test with different values (but the same number of elements). A different element count gets its own script/scenario.
Being able to dynamically change the number of elements is only worthwhile if you need to test many different element counts; otherwise the added complexity of dynamically creating the body is just not worth it.