I am simulating an evacuation and would like to pick up people with buses. I would like to select "exact quantity (wait for)" because partially empty buses would be highly inefficient. However, the problem I'm running into is that some people may be left behind if the last group is smaller than the specified bus capacity: the bus will then not leave, because it's not filled up.
Does anybody know a way of using conditions to get around the problem? I can't just modify the total number of waiting people to fill all the buses, because I have different groups of people entering different types of vehicles.
Something like:
exact quantity (wait for) - IF "waiting area" contains > 12 agents
quantity (if available) - IF "waiting area" contains ≤ 12 agents
Thanks
Keep your Pickup blocks with "exact quantity (wait for)". The quantity is a dynamic property, re-evaluated every time an agent enters the Pickup block, so you can track the number remaining to pick up in a variable (set it once you know how many are to be picked up in total, and decrement it on every pickup) and use a conditional statement (a Java ternary expression) in your Pickup block's quantity.
If your buses take 12 passengers as in your question, and your left-to-pickup is an int variable called leftToPickup, the expression would be
leftToPickup < 12 ? leftToPickup : 12
(read as 'if leftToPickup is less than 12, the quantity expression evaluates to leftToPickup, otherwise it evaluates to 12').
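A minimal sketch of the bookkeeping in Java, assuming an int variable leftToPickup and a bus capacity of 12 (the variable names, and the use of the Pickup block's "On pickup" action, are illustrative, so adapt them to your model):

// Set once the total number of people to evacuate is known,
// e.g. in a startup action (totalToEvacuate is a placeholder):
leftToPickup = totalToEvacuate;

// The Pickup block's quantity (a dynamic property, re-evaluated
// every time an agent enters the block):
leftToPickup < 12 ? leftToPickup : 12

// In the Pickup block's "On pickup" action, one agent has just
// been picked up, so decrement the counter:
leftToPickup--;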
Screenshots of a 'minimal example' model to do this below.
Require assistance in calculating the Total Active Users from Feb 16 2020 to March 16 2020.
I have tried using calculated fields, but not getting the correct results. Please advise.
Thank you,
Nirmal
To find the number of unique values that appear in a field, say [user_code], you can use the COUNT DISTINCT function, COUNTD(), as in COUNTD([user_code]).
To restrict the data to a particular time range, one way is to put your date field on the Filter shelf and choose settings that include only the data rows you want, say the range from 2/16 to 3/16 as you stated.
Alternatively, you can push the filtering condition into the calculation with an IF function call, as in COUNTD(IF <data is relevant> THEN [user_code] END), effectively combining the two techniques. That works because if there is no ELSE clause and the IF condition is false, the IF statement evaluates to null. Since COUNTD() silently ignores nulls, like other aggregation functions, the expression acts as if the irrelevant data rows were filtered out.
So, for example,
COUNTD(IF [dates] >= #2/16/2020# AND [dates] <= #3/16/2020# THEN [user_code] END)
will tell you the number of unique user codes during the period between 2/16 and 3/16. The DATEDIFF() function will probably be useful in more elaborate tests.
Finally, what if you want more flexibility? You could easily use Parameters or Filter controls to let the user choose the date range interactively.
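For instance, with two hypothetical date parameters, [Start Date] and [End Date], the calculation above becomes:

COUNTD(IF [dates] >= [Start Date] AND [dates] <= [End Date] THEN [user_code] END)

Showing the parameter controls on the dashboard then lets users adjust the window without editing the calculation.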
If you want this calculation repeated for each possible day, showing the unique users in the preceding 30-day period as a sort of rolling count, then you'll need to learn some more advanced features: either multiple calculations as above for different time ranges, Table Calculations, or some data prep and/or data padding with Tableau Prep Builder, Python, or another technique. That's mostly because in that scenario each data row contributes to multiple rolling counts, rather than to a single count when partitioning the data by some dimension.
The problem I'm trying to solve
Get stock ticks
Always consider latest stock price
Every x seconds, take a snapshot of the ticks and send it for processing
So I have an Observable source of stock ticks. It sends only the ticks for stocks I'm interested in. What I need to do is receive these stock prices and, every x seconds (for the sake of example, let's say every 3 seconds), send a snapshot of prices for processing. If within those 3 seconds I receive 2 ticks for the same stock, I only need the latest tick. This processing is compute-heavy, so if possible I would like to avoid sending the same stock price for processing twice.
To give an example:
Let's say at the beginning of the sequence I receive 2 ticks -> MSFT:1$, GOOG:2$.
In the next 3 seconds I receive nothing, so the MSFT & GOOG ticks should be sent for processing.
The next second I receive new ticks -> MSFT:1$, GOOG:3$, INTL:3$.
Again, let's assume nothing comes in within the next 3 seconds.
Here, since the MSFT price didn't change (it's still 1$), only GOOG & INTL should be sent for processing.
And this repeats throughout the day.
Now, I think Rx helps solve this kind of problem in an easy and elegant way, but I'm having trouble coming up with the proper queries.
This is what I have so far; I'll try to explain what it does and what the issue with it is:
var finalQuery =
    from priceUpdate in source // source is the IObservable<StockTick> of ticks
    group priceUpdate by priceUpdate.Stock into grouped
    from combined in Observable.Interval(TimeSpan.FromSeconds(3))
        .CombineLatest(grouped, (t, pei) => new { PEI = pei, Interval = t })
    group combined by new { combined.Interval } into combined
    select new
    {
        Interval = combined.Key.Interval,
        PEI = combined.Select(c => new StockTick(c.PEI.Stock, c.PEI.Price))
    };
finalQuery
    .SelectMany(combined => combined.PEI)
    .Distinct(pu => new { pu.Stock, pu.Price })
    .Subscribe(priceUpdate =>
    {
        Process(priceUpdate);
    });
public class StockTick
{
    public StockTick(string stock, decimal price)
    {
        Stock = stock;
        Price = price;
    }

    public string Stock { get; set; }
    public decimal Price { get; set; }
}
So this gets the stock prices, groups them by stock, then combines the latest from each grouped sequence with Observable.Interval. This way I'm trying to ensure that only the latest tick for a stock is processed, and that it fires every 3 seconds.
Then it groups again, by interval this time; as a result I have a group of sequences for each 3-second interval that has passed.
As a last step, I flatten this into a sequence of stock price updates using SelectMany, and I also apply Distinct to ensure the same price for the same stock is not processed twice.
There are 2 issues with this query that I don't like. First, I don't really like the double group-bys: is there any way to avoid them? Second, with this approach I have to process prices one by one, while what I really want is snapshots: whatever I have within each 3-second window should be bundled and sent for processing together, but I can't figure out how to do the bundling.
I'd be happy to hear suggestions for solving this problem another way, but I would prefer to stay within Rx unless there is really something much better.
A couple things:
You'll want to take advantage of the Sample operator.
You probably want DistinctUntilChanged instead of Distinct. If you use Distinct, then if MSFT goes from $1 to $2 and then back to $1, you won't get an event on the third tick.
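To see the difference in isolation, here is a tiny sketch using ToObservable on an in-memory sequence:

var prices = new[] { 1m, 2m, 1m }.ToObservable();
prices.Distinct().Subscribe(p => Console.WriteLine(p));             // prints 1, 2 (the return to $1 is swallowed)
prices.DistinctUntilChanged().Subscribe(p => Console.WriteLine(p)); // prints 1, 2, 1 (only consecutive repeats are dropped)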
I imagine your solution will look something like this:
IObservable<StockTick> source;

source
    .GroupBy(st => st.Stock)
    .Select(stockObservable => stockObservable
        .Sample(TimeSpan.FromSeconds(3))
        .DistinctUntilChanged(st => st.Price)
    )
    .Merge()
    .Subscribe(st => Process(st));
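If you also want the snapshot behaviour the question asks for (one bundle per window rather than one event per stock), a rough sketch is to buffer the merged stream instead of sampling each group; ProcessSnapshot here is a hypothetical batch version of Process:

source
    .GroupBy(st => st.Stock)
    // Suppress unchanged prices per stock, so a stock that didn't move
    // doesn't show up in later snapshots
    .Select(stockObservable => stockObservable.DistinctUntilChanged(st => st.Price))
    .Merge()
    // Close a buffer every 3 seconds; skip empty windows
    .Buffer(TimeSpan.FromSeconds(3))
    .Where(window => window.Count > 0)
    // Within each window, keep only the latest tick per stock
    .Select(window => window
        .GroupBy(st => st.Stock)
        .Select(g => g.Last())
        .ToList())
    .Subscribe(snapshot => ProcessSnapshot(snapshot));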
EDIT (Distinct performance problems):
Each Distinct operator has to maintain the full distinct history within it. If you have a high-priced stock, for example AMZN, which so far today has ranged from $958 to $974, then you could end up with a lot of data: that's roughly 1600 possible price points (at penny increments) that have to sit in memory until you unsubscribe from the Distinct. It will also eventually degrade performance, as each AMZN tick has to be compared to those ~1600 existing data points before going through. If this is a long-running process (spanning multiple trading days), you'll end up with even more data points.
And that's just one stock: given N stocks, you have N Distinct operators, each maintaining its own ever-growing history.
I am making reports with BIRT. I have a table with "item planned quantity", "finished quantity", and a percentage; something like: Planned 50, Finished 20, Percentage 40%, and so on.
After this row I have the sum of 'planned quantity', the sum of 'finished quantity',
and the average of the percentage. The main problem is that this average formula also includes percentages equal to 0.
Does anyone know how to set a filter or an expression like measure["percent"] != '0', or something along those lines?
For the record, 192.168... is a purely internal IP address. Unless someone is on your LAN, they won't have access to it, which means that nobody on the WAN can see the link you posted.
As to your issue - here are a few things to try:
1. Check your data type. I think integer division rounds down, so AVE(12/13) will equal 0 instead of 0.923.
2. If you want to avoid zero values in your average, you'll need to gather counts of the non-zero rows, which requires some scripting. You'll probably be best off creating a scripted data set for this.
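The filter-style expression asked about in the question can also be written with BIRT's aggregate functions in a data item expression. This is a sketch only (I'm going from memory on the Total function signatures, and "percent" stands in for your actual column):

// Average of the non-zero percentages only:
Total.sum(row["percent"], row["percent"] != 0) / Total.count(row["percent"] != 0)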
See if you can take a screenshot and attach it to this issue if neither of these ideas work or if you need further assistance (particularly with the second option I suggested).
Thanks!
I work for a software company, and I am working with a database that tracks certain events that occur in one of our games. Every time one of the tracked events occurs, a text entry in the “Event Type” field specifies what kind of event it is – “User Login,” “Enemy Killed,” “Player Death,” etc. Another field, “Session ID,” assigns a unique ID number to each individual game session. So if a user logs in to the game, kills eight enemies, and then logs out again, each of those Enemy Killed events will have the same Session ID.
I’m trying to make a histogram showing the number of sessions that have x number of Enemy Killed events. How do I go about this? I’m a raw beginner at Tableau, so if you can dumb down your answer to the explain-like-I’m-five level that would be great.
Tableau 9.0 has been launched, and your problem can be solved entirely inside Tableau.
What you need is to understand the Level of Detail calculations. It will look like this:
{ FIXED [Session ID] : COUNT( IF [Event Type] = 'Enemy Killed'
THEN 1
END )
}
This will calculate how many kills each session had. You can create BINs with this field, and count how many sessions there are (COUNTD([Session ID])).
Well, my answer will echo many of my previous answers: your database is not ready for that analysis.
Basically what your database should look like is:
SessionId EnemiesKilled
1234 13
So you could create a histogram on EnemiesKilled.
To do the histograms, you can create BINs (right-click on the field, Create Bins), but I find that feature very limited, as it only creates BINs of the same width. What I usually do instead is a bunch of IF and ELSEIF clauses to manually create the BINs, to better suit my purposes.
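For example, a hand-rolled bin field (the bin edges here are purely illustrative) might be:

IF [EnemiesKilled] < 5 THEN "0-4"
ELSEIF [EnemiesKilled] < 10 THEN "5-9"
ELSEIF [EnemiesKilled] < 20 THEN "10-19"
ELSE "20+"
END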
To convert your db to the format I explained, it's better if you can manipulate it outside Tableau and connect to the result directly. If it's SQL, a GROUP BY on Session ID and a COUNT of the Enemy Killed events should work (not exactly like this, but that's the idea).
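Something along these lines, with the table and column names guessed:

SELECT SessionId, COUNT(*) AS EnemiesKilled
FROM Events
WHERE EventType = 'Enemy Killed'
GROUP BY SessionId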
To do it in Tableau, you can drag SessionId (to either Marks or Rows; for the purpose of creating a table I usually put everything on Marks and choose a Bar chart, so Tableau won't waste time plotting anything) and a calculated field like:
SUM(
IF EventType = "Enemy Killed"
THEN 1
ELSE 0
END
)
Then export the data to a CSV or MDB file and connect to that.
I have a report that I've written and I understand how to create running totals and such, but need help creating a custom evaluation formula.
I have two levels of groups: the first group is based on a certain user, and the next group is based on the transactions that user has been involved in. I have details hidden, and am only interested in the totals for a particular activity. This is working great, and the totals are working properly. The problem is that each activity has a 'line number', which can be the same as in another activity (i.e., two activities can each contain lines 1, 2, 3), so a distinct total over the whole set of data isn't accurate; I only want it to be distinct within each individual recordset, not globally.
The example is below... if I do a count on each record for this dataset, it comes out to 18 because there are duplicate line numbers within each activity... but if I do a distinct count, it only comes to 9 because of duplicate line numbers across multiple activities.
I guess what I need to know is how I can take the totals per detail group and have them total up properly in my second footer. I assume it's going to take compiling a string from the activity number and the line number, and then comparing those?
Here is an example of the data contained within the total groupings:
I figured this out on my own... it turned out to be pretty simple. I converted my numeric values to text, concatenated a copy of the transaction id and the line id as my text value, and did a distinct count on that... Sometimes it just helps to stop staring the problem down.
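For anyone hitting the same issue, the formula amounts to something like this (Crystal syntax; the field names are illustrative):

// Formula field, e.g. @ActivityLineKey
ToText({Activity.ActivityNumber}, 0, "") + "-" + ToText({Activity.LineNumber}, 0, "")

A Distinct Count summary on @ActivityLineKey in the group footer then counts each activity/line pair exactly once.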