AnyLogic: Time-dependent stock in-flow with table functions

I would like to simulate a stock (an array with six different dimensions) and its changes over the years in AnyLogic. The initial values of the stock are given, as well as the changes over the years (in Excel table format). I think the stock changes can't be described by a constant mathematical function, so they need to be implemented in the model as a table.
My approach was to create the stock and also the in-flow as arrays with the six dimensions. I entered the initial values in the stock (initial values of stock). After that, I created six table functions with the number of the year as the argument and the change value of the stock in that year in the value column (example of a table function). To get the particular in-flow for the current simulation year, I entered the table function in the in-flow and set a variable as its argument (inflow of the stock). For the variable I entered a function which I found on the AnyLogic help website: int getYear(Date date) (current year variable).
If my understanding is right, this variable should return the current year of the simulation as an integer. Using this value as an argument for the table function, I thought the model should work, but it seems there is a problem with the variable for the current year. I hope this was understandable. If not, please don't hesitate to ask.
Do you have any idea what could be the mistake or what I can do better to create a working model?
Thank you very much for your help.
Have a nice day and stay safe
David

I've been looking for an answer to the same situation as yours, but wasn't able to find a clear guide for it.
In my case, I added an 'event' from the agent-based palette with Trigger type = 'Timeout', Mode = 'Cyclic', and both First occurrence time and Recurrence time set to 1 (in 'years' in your case).
Then try to set the event's action to what you intended:
Veraenderung_Lkw_Flotte[Nicht_Autom] = Veraenderung_Nicht_Autom_pro_Jahr(time());
Veraenderung_Lkw_Flotte[AF_Stufe_1] = Veraenderung_Stufe_1_pro_Jahr(time());
Veraenderung_Lkw_Flotte[AF_Stufe_2] = Veraenderung_Stufe_2_pro_Jahr(time());
Here, time() returns the current model time. You may need a different expression depending on your model's time units.
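If you would rather keep the calendar year as the table-function argument, as in the original getYear idea, here is a hedged sketch of the same event action (AnyLogic action code is Java; getYear(Date) and date() are the built-in AnyLogic functions the question already mentions, and the table-function names are taken from above):
Veraenderung_Lkw_Flotte[Nicht_Autom] = Veraenderung_Nicht_Autom_pro_Jahr(getYear(date()));
This also hints at the likely mistake in the question: the variable needs to hold the call getYear(date()), not the signature int getYear(Date date) copied from the help page.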
Hope you find it helpful.

Related

How to get all missing days between two dates

I will try to explain the problem on an abstract level first:
I have X amount of data as input, which will always have a field DATE. Before, the dates that came as input (after some processing) were put in a table as output. Now I am asked to output both the input dates and every date between the minimum date received and one year from that moment. If there was originally no input for some day between these two dates, all fields must come out as 0, or equivalent.
Example: I have two inputs, one with '18/03/2017' and another with '18/03/2018'. I now need to create output data for all the missing dates between '18/03/2017' and '18/03/2018'. So, output '19/03/2017' with every field set to 0, and the same for the 20th and the 21st and so on.
I know how to do this programmatically, but not in PowerCenter. I've been told to do the following (which I have done, but I would like to know of a better method):
Get the minimum date, day0. Then, with an Aggregator, create 365 fields, each holding day0+1, day0+2, and so on, to create an artificial year.
After that we do several transformations, like sorting the dates and a union between them, to get the data ready for a Joiner. The idea of the Joiner is to do a Full Outer Join between the original data and the all-zero data we got from the previous Aggregator.
Then a Router picks, in one of its groups, the data that had actual dates (and fields without nulls), and in another group the rows where all fields are null; those fields are then given a 0 and finally written to a table.
I am wondering how this can be achieved while, for starters, removing the need to add 365 days to a date. If I were to do this same process for 10 years instead of one, the task would get ridiculous really quickly.
I was wondering about an XOR type of operation, or some other function, that would cut the number of steps needed for what I (maybe wrongly) feel is a simple task. Currently I need 5 steps just to know which dates are missing between two dates, a minimum and one year from that point.
I have tried to be as clear as possible, but if I failed at any point please let me know!
I'm not sure what the Aggregator is supposed to do.
The same with the Full Outer Join? A normal join on a constant port is fine :)
Can you calculate the needed number of 'duplicates' before the Joiner? In that case a Lookup configured to return 'all rows', with a less-than-or-equal predicate, can help make the mapping much more readable.
In any case you will need a helper table (or file) with a sequence of numbers between 1 and the number of potential duplicates (or more).
I use our time dimension in the warehouse, which has one row per day from 1753-01-01 and the 200,000 following days, plus a primary integer column with values from 1 and up...
You've said you know how to do this programmatically, and to be fair this problem is better suited to that sort of solution... but that doesn't exclude PowerCenter by any means: just feed the 2 dates into a Java transformation and apply some code to produce all the dates between them, outputting a record for each. The Java transformation is ideal for record generation.
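For illustration, here is a minimal sketch of the date-expansion logic such a Java transformation could wrap. The class and method names are hypothetical, this is plain java.time code, and the PowerCenter-specific port handling is left out:
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

public class DateExpander {
    // Return every date from start to end, inclusive.
    static List<LocalDate> allDatesBetween(LocalDate start, LocalDate end) {
        List<LocalDate> dates = new ArrayList<>();
        for (LocalDate d = start; !d.isAfter(end); d = d.plusDays(1)) {
            dates.add(d);
        }
        return dates;
    }

    public static void main(String[] args) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("dd/MM/yyyy");
        LocalDate min = LocalDate.parse("18/03/2017", fmt); // the minimum input date
        // One year from the minimum date, as the question specifies.
        for (LocalDate d : allDatesBetween(min, min.plusYears(1))) {
            System.out.println(d.format(fmt)); // in the transformation, emit a row instead
        }
    }
}
Inside the actual Java transformation you would set the output port for each generated date and call generateRow() to emit it as a record.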
OK... so you could override your Source Qualifier to achieve this in the selection query itself (I'm giving an Oracle-based example, as it's what I'm used to, and I'm assuming your input data comes from a table). I looked up the CONNECT BY syntax here:
SQL to generate a list of numbers from 1 to 100
SELECT MIN(tablea.DATEFIELD) + levquery.n - 1 AS Port1
FROM tablea, (SELECT LEVEL n FROM DUAL CONNECT BY LEVEL <= 365) levquery
GROUP BY levquery.n
(Check if the query works for you - I haven't got access to PC to test it at the minute.)

Tableau comparison calculation between integer dates

I'm currently struggling with a calculation I'm trying to create in Tableau, so any help you can provide would be great.
Basically I have a calculated field within Tableau called [ExampleCount], which is a count distinct based on a simple Yes/No condition.
I have this information displayed on two separate sheets in a dashboard, one filtered for the current activity month and one for the previous.
What I now need to do is have another sheet with the same calculation of [ExampleCount] but showing the difference between the current/previous months.
So: [ExampleCount (This Activity Month)] - [ExampleCount (Previous Activity Month)]
The activity month is an integer value, currently ranging from 1 to 9.
I feel like this should be a simple calculation, but I've tried several different methods and have been unable to come up with anything conclusive.
It would also be good if this could change periodically.
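For what it's worth, one common Tableau pattern for this kind of month-over-month difference is a table calculation computed along the activity month; a hedged sketch, using [ExampleCount] from above (ZN and LOOKUP are standard Tableau functions):
ZN([ExampleCount]) - LOOKUP(ZN([ExampleCount]), -1)
With Compute Using set to the activity month field, each month is compared with the previous one.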
Kind Regards,
Plain_Lazy

Z-score with Rolling Window Panel Data

I am trying to calculate a z-score with a rolling window. Specifically, I need to calculate the standard deviation over a 3-year rolling window to compute the z-score. A minimal working example is given below:
use http://dss.princeton.edu/training/Panel101.dta
xtset country year
rolling sd_x1=r(sd), step(1) window(3) saving(sd_x1, replace) keep(year): sum x1, detail
Now after this I need to merge the results back with the original file, but the variable year does not appear; instead, a column named date appears, with all values missing. I am trying to merge using the following command:
merge 1:1 country year using sd_x1
However, I get the error that variable year is not found; this variable is indeed not kept when running the rolling command. Any help will be much appreciated.
I am always surprised that people have interest or faith in standard deviations based on three values.
A more direct approach would be to use rangestat (SSC). The syntax could be something like
use http://dss.princeton.edu/training/Panel101.dta
xtset country year
rangestat (sd) sd=x1, interval(year 0 2) by(country)
except that I cannot test this at the moment.
The key difference here is that rangestat produces new variables in the current dataset. Search the Statalist archives for examples of rangestat use.
Note that in your example the detail option is unnecessary as summarize by itself produces standard deviations.
You can extend this approach to get the mean at the same time, as sketched below.
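For example, a hedged sketch (again untested; rangestat accepts several statistic requests in one call):
rangestat (mean) mean=x1 (sd) sd=x1, interval(year 0 2) by(country)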

Apply matlab function to large table grouped by variables

I have a large table in MATLAB with 7 variables and about 2 million rows. The first column/variable has IDs, the second has dates, and the third has prices. For each ID and each date I want to check whether the price was above 100 on each of the previous 6 days. I have a solution, but it's very slow, so I would like ideas for improving speed. My solution is the following (with some toy data):
Data = table(reshape(repmat(1:4,3000,1),12000,1), ...
    repmat(datestr(datenum(2001,01,31):1:datenum(2009,04,18)),4,1), ...
    normrnd(200,120,12000,1), ...
    'VariableNames',{'ID','Date','Price'});

function y = Lag6days(x)
    % 1 where the price was above 100 on each of the previous 6 days
    y = zeros(size(x));
    for i = 7:size(x,1)
        y(i,1) = sum(x(i-6:i-1,1) > 100) == 6;
    end
end

Func = @Lag6days;
A = varfun(Func,Data,'GroupingVariables',{'ID'},'InputVariables','Price');
Any suggestions?
This might have something to do with the table data structure - which I'm not really used to.
Consider using 'OutputFormat','cell' in the call to varfun; this seems to work for me.
Of course you would have to make sure that the grouping procedure of varfun is stable, so that your dates don't get mixed.
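A hedged sketch of that varfun call, keeping the question's arguments and only adding the output option:
A = varfun(@Lag6days, Data, 'GroupingVariables',{'ID'}, 'InputVariables','Price', 'OutputFormat','cell');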
You could consider extracting each ID group into separate vectors by using:
A1 = Lag6days(Data.Price(Data.ID==1));
...
So you can have more control over your dates getting shuffled.
PS: Obviously your algorithm will only work if your prices are already sorted by date and there is exactly one price entry per day. It would be good practice to check both assumptions.
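A hedged sketch of such a check, using the toy data's variable names and assuming Date holds datestr text as in the question's example:
for id = unique(Data.ID)'
    d = datenum(Data.Date(Data.ID == id, :));
    assert(all(diff(d) == 1)); % dates ascending, exactly one entry per day
end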

Tableau Future and Current References

Tough problem I am working on here.
I have a table of CustomerIDs and CallDates. I want to measure whether there is a 'repeat call' within a certain period of time (up to 30 days).
I plan on creating a parameter called RepeatTime, which is a range from 0 to 30 days, so the user can slide a scale to see the number/percentage of total repeats.
In Excel, I have this working: I sort by CustomerID and then by CallDate from earliest to latest. I then have formulas like:
=IF(AND(CurrentCustomerID = FutureCustomerID, FutureCallDate - CurrentCallDate <= RepeatTime), 1,0)
CurrentCustomerID is the current row's ID, and FutureCustomerID is the following row's (so it is checking whether the customer ID is the same).
FutureCallDate is the following row's date and CurrentCallDate is the current row's. Subtracting the current call date from the future call date measures the time in between.
The goal is to be able to see, dynamically, how many customers called in for a specific reason within maybe 4 hours, or 1 day, or 5 days, etc., all the way up to 30 days (that is our actual metric, but it is good to see the calls which are repeats within a shorter time frame so we can investigate).
I had a similar problem; see here for a detailed version: Array calculation in Tableau, maxif routine.
In your case, that is basically the same thing as mine, so you could apply that solution, but I find the one I'm about to give easier to understand. I would do:
1) Create a calculated field called RepeatTime:
DATEDIFF('day', LOOKUP(MAX([CallDates]), -1), MAX([CallDates]))
This will calculate how many days have passed from the previous call to the current one. You can wrap it in an IFNULL so the first entry doesn't return Null.
2) Drag CustomerIDs, CallDates and RepeatTime to the worksheet (they can be on the Marks card; they don't need to be on Rows or Columns).
3) Configure the table calculation of RepeatTime: Compute using Advanced..., partitioning by CustomerIDs, addressing CallDates.
Also sort by field CallDates, Maximum, Ascending.
This will guarantee the table calculation works properly.
4) Now you have a base that you can use for what you need, for instance by exporting it to csv or mdb and connecting to that.
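For instance, to flag a repeat within the user-chosen window, a hedged sketch: the question reserves the name RepeatTime for the parameter, so assume the step 1 field is renamed to something like [DaysSinceLastCall] (a hypothetical name):
IIF([DaysSinceLastCall] <= [RepeatTime], 1, 0)
Summing that flag then gives the number of repeat calls within the chosen window.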
The best approach, actually, is to have this RepeatTime field calculated outside Tableau, in your database, so it's already there when you connect. But this is a way to have Tableau do the calculation for you.
Unfortunately there's no way to do this directly with your database.