Lookup DB value from another table and change slightly - FileMaker

I am trying to create a job management system using FileMaker Pro 14 Advanced and FM Starting Point 4.6.
I have only just started looking at it, so I am pretty much a newbie with it. FM Starting Point has Projects and Estimates, and each Project can have multiple Estimates. What I have done is set it up so that you have to assign a Project to an Estimate before it will save. I have used the Looked-up value option in the database management to fill the Estimate ID with the Project ID that has been assigned.
Project numbers start with an A followed by six digits, for example A120000. What I want to do is change the A to a Q when the Project ID is set as the Estimate ID.
Can anyone point me towards the right method?

Have a look at calculated fields. I would probably have one field for the digits and one for the flag, then concatenate these in a calculated field for display.
Then set up a script that triggers on change in a field, which sets the flag to the other value.
More info about calculated fields: http://www.filemaker.com/help/12/fmp/html/create_db.8.17.html
More info about script triggers: http://www.filemaker.com/help/13/fmp/en/html/create_layout.9.61.html
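For example, a minimal calculation sketch, assuming the project number lives in a field named Project ID (the field names here are illustrative; adjust them to your FM Starting Point layout):

// digits only: drop the leading letter
Digits = Right ( Project ID ; Length ( Project ID ) - 1 )
// prepend the new flag for the Estimate ID
Estimate ID = "Q" & Digits

With the two-field approach above, the script trigger then only has to change the one-character flag field.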


How do I group dates in IBM COGNOS when building a front end dashboard

I am an extreme newbie to COGNOS (5 days, give or take a few hours) and have been thrown in the deep end.
I have a data package I am working with, and the only date field I can use has the property of timestamp. I have looked through other questions that are similar, and I have used the CAST function to turn the field into a date, which worked:
cast ( [TR_Logical Layer].[Fact Event].[Period Datetime], DATE) - so I was happy with that.
However, now when I am building the graph I am getting repeated dates, and I can only think that this is because somehow it is still holding onto the timestamp portion of the field.
How can I get all of my many entries for, say, 21 Jul to be grouped into one date?
Possible issue: the repetition is probably due to how the data is grouped (it is grouped down to the time, which makes the same day repeat).
Suggested solution: add a determinant to control granularity and grouping.
Here are the steps:
https://www.ibm.com/docs/en/cognos-analytics/11.1.0?topic=determinants-specifying
1. Click the query subject you want, and click Actions, Edit Definition.
2. Click the Determinants tab.
3. Click Add under the Determinants box. The entry New Determinant displays in the box. To give this entry a meaningful name, right-click it, and click Rename.
4. To define a key, right-click a query item in the Available items box and click Add as Key. Tip: you can also drag query items to the Key box.
5. To identify which query items should be associated with this determinant, right-click query items in the Available items box, and click Add as Attributes. Tip: you can also drag query items to the Attributes box. You can have a determinant with no attributes defined for it; Framework Manager uses this type of determinant to indicate which query items are indexed.
6. To specify that the selected determinant should be used as the unique identifier, select the Uniquely Identified check box. Do this only if the data in this item is unique for every row in the underlying data source. You can specify more than one unique determinant if they are truly unique; at query time, the relationship being used will determine which unique determinant to use.
7. Select the Group By check box to indicate that when keys or attributes associated with that determinant are repeated in the data, IBM Cognos Analytics should apply aggregate functions and grouping to avoid double-counting.
8. If you want to change the order of the determinants, use the arrow buttons. Determinants are processed in the order in which they are specified in the model.
9. Click OK.

Get the distinct count of applications used by users who used application A in Tableau

I have a data set of users and the applications they used. For each application, I want to compute the total number of users who used only this application, who used this application plus one other application, plus two other applications, and so on. In the end I need a treemap where each square represents the total number of users who used only the application, the application plus 1 other, plus 2 others, etc., and also to compute the overlap percentage, that is, how many users used only this application versus how many used it together with other applications. I could compute these metrics without the application constraint, i.e. the number of users who used one application, 2 applications, 3 applications, and so on, where the overlap percentage was the percentage of users who used more than one application.
To do so:
I created a calculated field to count the total number of applications per user.
Then I used this field to create bins, which gave me the total number of users who used 1 app, 2 apps, ... N apps.
I then ended up creating a treemap with those bins, where the size of each square is COUNTD(users).
How do I do the same, but instead of covering (All) applications, do it for a single application and its companion applications? I need an application selector, so that each time I change the application, the treemap changes.
As you have not given sample data, I am proposing a solution with two assumptions:
No rows are duplicated, i.e. every user is associated with any given application at most once.
There are no more than about 15-20 distinct applications (beyond that, the solution may not work, given Tableau's calculation range).
I have taken sample data as follows.
Step-1: Create a calculated field (CF) named CF:
// map every application with a number
CASE [Apl id]
WHEN 'apl1' THEN 1
WHEN 'apl2' THEN 2
WHEN 'apl3' THEN 3
WHEN 'apl4' THEN 4
WHEN 'apl5' THEN 5
END
You may add as many WHEN clauses as you have unique values of the application field.
Step-2: Add another CF, CF1, as:
// binary representation of each application used
{FIXED [User id]: sum(10^([CF]-1))}
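To illustrate the idea with the mapping above (a hypothetical user): someone who used only apl1 and apl3 gets CF values 1 and 3, so CF1 = 10^(1-1) + 10^(3-1) = 1 + 100 = 101. Read right to left, the digits say: apl1 yes (1), apl2 no (0), apl3 yes (1).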
Step-3: Create an integer parameter, Parameter 1, whose value selects the application number to check (1 to 5 with the mapping above).
Step-4: Add another CF, CF2, as:
// checks whether the user used the selected application
IF LEFT(RIGHT(RIGHT('00000' + STR([CF1]), 5), [Parameter 1]), 1) = '1'
THEN 'Y' ELSE 'N' END
Now you may build your view.
Selecting a parameter value will filter the records to users who are at least associated with the selected application. The distinct count will then give you the count of total applications used by these users. Now you can proceed to build your treemap. Please tell me if any other help or explanation of the solution is required.
Note-1: This solution relies on positional (binary-style) logic, hence the restriction on the number of applications. If you look at the CF1 values, each is a binary-style representation of the applications used by that user, where the rightmost digit denotes whether the user used apl1 (1) or not (0), and so on from right to left.
Note-2: While calculating CF2, I padded the CF1 values with leading zeros to a width of 5. You have to change this according to the total count of your applications.

How to get all missing days between two dates

I will try to explain the problem on an abstract level first:
I have X amount of data as input, which is always going to have a field DATE. Before, the dates that came as input (after some processing) were put in a table as output. Now I am asked to output both the input dates and every date between the minimum date received and one year from that moment. If there was originally no input for some day between these two dates, all fields must come out as 0, or equivalent.
Example: I have two inputs, one with '18/03/2017' and the other with '18/03/2018'. I now need to create output data for all the missing dates between '18/03/2017' and '18/04/2017'. So, output '19/03/2017' with every field set to 0, and the same for the 20th and the 21st, and so on.
I know how to do this programmatically, but in PowerCenter I do not. I've been told to do the following (which I have done, but I would like to know of a better method):
Get the minimum date, day0. Then, with an aggregator, create 365 fields, each holding day0+1, day0+2, and so on, to create an artificial year.
After that we do several transformations, like sorting the dates and a union between them, to get the data ready for a joiner. The idea of the joiner is to do a full outer join between the original data and the all-zero data that we got from the previous aggregator.
Then a router picks, with one of its groups, the data that had actual dates (and fields without nulls), and with another group the rows where all fields are null; those fields are then given a 0 to finally be written to a table.
I am wondering how this can be achieved while, for starters, removing the need to add 365 days to a date. If I were to do this same process for 10 years instead of one, the task gets ridiculous really quickly.
I was wondering about an XOR type of operation, or some other function that would cut the number of steps needed for what I (maybe wrongly) feel is a simple task. Currently I need 5 steps just to know which dates are missing between two dates, a minimum and one year from that point.
I have tried to be as clear as possible, but if I failed at any point please let me know!
I'm not sure what the aggregator is supposed to do.
The same goes for the full outer join - a normal join on a constant port is fine :)
Can you calculate the needed number of 'duplicates' before the joiner? In that case a lookup configured to return 'all rows', with a less-than-or-equal predicate, can help make the mapping much more readable.
In any case you will need a helper table (or file) with a sequence of numbers between 1 and the number of potential duplicates (or more).
I use our time dimension in the warehouse, which has one row per day from 1753-01-01 through the next 200,000 days, and a primary integer column with values from 1 and up.
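If you do not already have such a table, a minimal sketch of building one (Oracle syntax; the table and column names are illustrative, and the CONNECT BY trick is the same one used in the last answer below):

-- helper table with one row per candidate day offset
CREATE TABLE day_numbers AS
SELECT LEVEL AS n
FROM DUAL
CONNECT BY LEVEL <= 3650; -- enough for roughly 10 years of days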
You've identified that you know how to do this programmatically, and to be fair this problem is more suited to that sort of solution... but that doesn't exclude PowerCenter by any means. Just feed the 2 dates into a Java transformation and apply some code to produce all the dates between them, outputting a record for each. The Java transformation is ideal for record generation.
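A standalone sketch of that core logic in plain Java (the hard-coded dates come from the question's example; inside the Java transformation you would read them from the input ports and replace the println with setting the output port and calling generateRow()):

import java.time.LocalDate;

public class DateRange {
    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2017, 3, 18);
        LocalDate end = LocalDate.of(2018, 3, 18);
        // one record per date, inclusive of both endpoints
        for (LocalDate d = start; !d.isAfter(end); d = d.plusDays(1)) {
            System.out.println(d);
        }
    }
}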
OK... so you could override your source qualifier to achieve this in the selection query itself (I am giving an Oracle-based example as it's what I'm used to, and I'm assuming your input data is from a table). I looked up the CONNECT BY syntax here:
SQL to generate a list of numbers from 1 to 100
SELECT MIN(tablea.DATEFIELD) + levquery.n - 1 AS Port1
FROM tablea,
     (SELECT LEVEL AS n FROM DUAL CONNECT BY LEVEL <= 365) levquery
GROUP BY levquery.n
(Check whether the query works for you - I haven't got access to a PC to test it at the minute.)

Zscore with Rolling Window Panel Data

I am trying to calculate a z-score over a rolling window. Specifically, I need to calculate the standard deviation over a 3-year rolling window in order to compute the z-score. A minimal working example is given below:
use http://dss.princeton.edu/training/Panel101.dta
xtset country year
rolling sd_x1=r(sd), step(1) window(3) saving(sd_x1, replace) keep(year): sum x1, detail
Now after this I need to merge it back with the original file. But the variable year does not appear; instead a column named date appears, with all values missing. I am trying to merge using the following command:
merge 1:1 country year using sd_x1
However, I get the error that variable year is not found; this variable is in fact not kept when running the rolling command. Any help will be much appreciated.
I am always surprised that people have interest or faith in standard deviations based on three values.
A more direct approach would be to use rangestat (SSC). The syntax could be something like
use http://dss.princeton.edu/training/Panel101.dta
xtset country year
rangestat (sd) sd=x1, interval(year 0 2) by(country)
except that I cannot test this at the moment.
The key difference here is that rangestat produces new variables in the current dataset. Search the Statalist archives for examples of rangestat use.
Note that in your example the detail option is unnecessary as summarize by itself produces standard deviations.
You can extend this approach to get the mean at the same time.
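For instance, a sketch (untested, and assuming the same data and window as above) that computes the rolling mean and sd in one pass and then forms the z-score:

use http://dss.princeton.edu/training/Panel101.dta
xtset country year
rangestat (mean) mean=x1 (sd) sd=x1, interval(year 0 2) by(country)
gen zscore = (x1 - mean) / sd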

Making a Histogram in Tableau

I work for a software company, and I am working with a database that tracks certain events that occur in one of our games. Every time one of the tracked events occurs, a text entry in the “Event Type” field specifies what kind of event it is – “User Login,” “Enemy Killed,” “Player Death,” etc. Another field, “Session ID,” assigns a unique ID number to each individual game session. So if a user logs in to the game, kills eight enemies, and then logs out again, each of those Enemy Killed events will have the same Session ID.
I’m trying to make a histogram showing the number of sessions that have x number of Enemy Killed events. How do I go about this? I’m a raw beginner at Tableau, so if you can dumb down your answer to the explain-like-I’m-five level that would be great.
Tableau 9.0 has been launched, and your problem can be solved entirely inside Tableau.
What you need is to understand Level of Detail calculations. It will look like this:
{ FIXED [Session ID] : COUNT( IF [Event Type] = 'Enemy Killed'
THEN 1
END )
}
This will calculate how many kills each session had. You can create bins with this field, and count how many sessions there are with COUNTD([Session ID]).
Well, my answer will echo many of my previous answers: your database is not ready for that analysis.
Basically what your database should look like is:
SessionId    EnemiesKilled
1234         13
So you could create a histogram on EnemiesKilled.
To do the histograms, you can create bins (right-click on the field, Create Bins), but I find that feature very limited, as it only creates bins of the same width. What I usually do instead is write a bunch of IF and ELSEIF conditions to build the bins manually, to better suit my purposes, along the lines of the sketch below.
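For example, a hand-rolled binning field might look like this (the thresholds are made up):

// manual bins over the per-session kill count
IF [EnemiesKilled] <= 5 THEN '0-5'
ELSEIF [EnemiesKilled] <= 20 THEN '6-20'
ELSE '21+'
END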
To convert your db to the format I explained, it's better if you can manipulate it outside Tableau and connect to it directly. If it's SQL, a GROUP BY on Session ID with a COUNT of Enemy Killed events should work (not exactly as written below, but that's the idea).
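A minimal sketch of that query (table and column names are hypothetical):

-- one row per session with its kill count
SELECT SessionId,
       COUNT(*) AS EnemiesKilled
FROM events
WHERE EventType = 'Enemy Killed'
GROUP BY SessionId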
To do it in Tableau, you can drag SessionId (to either Marks or Rows; for this purpose of creating a table I usually put everything on Marks and choose Bar chart, so Tableau won't waste time plotting anything) and a calculated field like:
SUM(
  IF [Event Type] = "Enemy Killed"
  THEN 1
  ELSE 0
  END
)
Then export the data to a csv or mdb file and connect to that instead.