I can't think of a better title, so feel free to make a suggestion once you understand the issue.
I was given a table to work with whose data I need to pull into another table:
Name
Month
Type
Value
For each record in the main table I need to pull the one "Value" that corresponds to it, and which value that is depends on all three of the other fields. So for example, if a record in the main table is:
Name: Google
Date: 3/17/2016
Type: M
Then I need to pull the value for the record in the other table where the Name is "Google", the month is "3", and the type is "M".
I was able to do this successfully (if slowly) using an ExecuteSQL command in a calculation field, with a ton of nested If statements for the names. (I have yet to figure out how to pass the record's data directly into the ExecuteSQL statement; it breaks when I try.) I would prefer to just grab the data directly. I can't switch over to the other layout because I need to see all of the records at once. And I can't do a simple relationship, because there isn't a real relationship: it's like there are three foreign keys working in tandem, and I only know how to use one of them to call the data.
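(For what it's worth, ExecuteSQL can take the record's data as bound parameters - each ? in the query is replaced by one of the trailing arguments - instead of nested If statements. A minimal sketch, assuming the main table occurrence is called Main and the lookup table occurrence Lookup:

ExecuteSQL (
    "SELECT \"Value\" FROM Lookup
     WHERE \"Name\" = ? AND \"Month\" = ? AND \"Type\" = ?" ;
    "" ; "" ;
    Main::Name ; Month ( Main::Date ) ; Main::Type
)

The quoted identifiers guard against reserved words such as VALUE.)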
Any idea on how to do this more simply?
Some ideas I've had, though I'm not sure they will work:
Using a calculation field as a match field to point to the right row dynamically (concatenating the three relevant fields into a kind of code). I'm not sure whether you can connect two tables by a calculation field.
Doing the same thing when the data is first brought into the other table, adding such a code as a single primary key.
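(As a sketch, that code would be an auto-entered calculation defined the same way on both tables, something like Name & "|" & Month & "|" & Type - with Month ( Date ) in place of Month on the main table.)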
I can't do a simple relationship because there isn't a real relationship, it's like there are three foreign keys working in tandem and I only know how to use one to call the data.
Simply define a relationship with three predicates - i.e. three pairs of match fields.
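With assumed occurrence names Main and Lookup, that would look something like this (cMonth being a stored calculation field on the main table that returns Month ( Date )):

Main::Name = Lookup::Name
Main::cMonth = Lookup::Month
Main::Type = Lookup::Type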
The idea of the SaaS tool is to have dynamic tables with dynamic custom fields and values of different types. We considered following the "force.com/salesforce.com" example, but it seems too complicated to maintain going forward, and it forces reports to be built on a huge level of abstraction, so we came up with a simple idea of our own - but we want to be sure it is a reasonably good approach.
This is the architecture we have today (in a few steps):
Each tenant has its own separate database on the cluster (Postgres 12).
A TABLES table keeps a reference row for every user-defined table; this entity has a ManyToOne relation to the META table and a OneToMany relation with the DATA table.
The META table holds the metadata configuration and has a OneToMany relation with FIELDS (which stores each field's name and type, e.g. TEXT/INTEGER/BOOLEAN/DATETIME etc., plus the attribute column it maps to - as a string, only as a reference).
The DATA table has a ManyToOne relation to TABLES and 50 character-varying columns named attribute1...attribute50, all NULL-able.
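A minimal DDL sketch of that layout, with assumed names and trimmed to a few attribute columns:

CREATE TABLE meta (
    id bigint PRIMARY KEY
);

CREATE TABLE fields (
    id            bigint PRIMARY KEY,
    meta_id       bigint NOT NULL REFERENCES meta(id),
    name          text   NOT NULL,  -- e.g. 'Year'
    type          text   NOT NULL,  -- TEXT / INTEGER / BOOLEAN / DATETIME ...
    attribute_ref text   NOT NULL   -- e.g. 'attribute6'
);

CREATE TABLE tables (
    id      bigint PRIMARY KEY,
    meta_id bigint NOT NULL REFERENCES meta(id),
    name    text   NOT NULL         -- e.g. 'CARS'
);

CREATE TABLE data (
    id         bigint NOT NULL,
    table_id   bigint NOT NULL REFERENCES tables(id),
    created_at timestamptz NOT NULL DEFAULT now(),
    attribute1 varchar,
    attribute2 varchar,
    -- ... up to attribute50, all NULL-able
    PRIMARY KEY (id, table_id)
);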
Example flow today:
When a user wants to open a table's data, e.g. "CARS", we load the META table with all of its FIELDS (to get the fields for the query). Say the user wants to query the Brand, Class, Year and Price columns.
Our logic looks up the references for Brand, Class, Year and Price in the META > FIELDS table, so we know that Brand = attribute2, Class = attribute5, Year = attribute6 and Price = attribute7.
We parse the request into a query, e.g. SELECT [attr 2,5,6,7] FROM DATA, and show the results to the user. If the user then filters the data, e.g. Year > 2017 AND Class = 'A', we use SQL's CAST() functionality, for example SELECT CAST(attribute6 AS int), attribute5 FROM DATA WHERE CAST(attribute6 AS int) > 2017 AND attribute5 = 'A'; - this lets us support most SQL principles.
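Spelled out against the sketch schema above, the two steps look roughly like this:

-- 1. Resolve logical column names to physical attribute columns
SELECT f.name, f.attribute_ref
FROM fields f
JOIN tables t ON t.meta_id = f.meta_id
WHERE t.name = 'CARS';

-- 2. Generated query for Year > 2017 AND Class = 'A'
SELECT attribute2              AS brand,
       attribute5              AS class,
       CAST(attribute6 AS int) AS year,
       attribute7              AS price
FROM data
WHERE table_id = 1  -- the CARS row's id in TABLES
  AND CAST(attribute6 AS int) > 2017
  AND attribute5 = 'A';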
However, moving forward, we are a bit scared about:
Managing such an environment as tenants and tables multiply (e.g. 50 tables per customer, with roughly 1-5 million rows per TABLE; 5 million is the maximum we allow, anything bigger goes to BigQuery). That gives 50-250 million rows in a single DATA_X table, which might affect query performance, especially since we expose simple WHERE statements (less-than, equals, null, etc.) through an abstraction language, e.g. GET CARS [BRAND,CLASS,PRICE...] FILTER [EQ(CLASS,A),MT(YEAR,2017)], developed to be similar to JQL (Jira Query Language).
Transaction locks: we allow batch-uploading CSVs into DATA_X, so when someone loads, say, 1 GB of data, the DATA table is more or less locked for other systems.
Keeping many NULL columns, which can waste some space. (For now we are not too scared: at TABLE creation the customer decides how many columns he wants, and based on that we assign the TABLE to one of the hard-coded entities DATA_5, DATA_10, DATA_15, DATA_20, DATA_30 or DATA_50, where the number is the limit on attribute columns; these entities are distinct, and we also support migration if a customer decides to switch from 5 to 10 attributes, etc.)
We are at a super early stage, so we can and should make these changes before we scale. We knew this was most likely not the best approach, but we kept it to get the project running for small customers, and for now it is working just fine.
We also thought about JSONB objects, but that is not an option, as we want to keep retrieving the data simple.
What do you think about this solution? (FYI: DATA has a composite PRIMARY KEY over two columns, (ID, TABLEID), plus a built-in CreatedAt column used in most of the queries, so there will be at most 3 indexes.)
If it seems bad, what would you recommend as an alternative, given the details I've shared (basically a schema-less RDBMS)?
IMHO, I anticipate issues when you want to join tables, and also with all the casting.
We followed the approach below; it may be of help to you.
We have a table called Cars plus a couple of companion tables, CarsMeta and CarsExtension. The underlying Cars table has all the fields common to all tenants. The CarsMeta table spells out which column types are available for extending the Cars entity. The CarsExtension table has columns like StringCol1...5, IntCol1...5, LongCol1...10.
This way you can also filter the data easily:
If the filter is on the base table, perform the search there; if results are found, match the ids against the CarsExtension table to get the extended rows for the entity.
If the filter is on the extended fields, search the extension table and match the results against the base entity ids.
We organize the extension table like this:

id - uniqueid
entityid - uniqueid (points to the primary key of the entity)
StringCol1 - string
...
IntCol1 - int
...
This makes it easy to join the entity and then get its data along with the extension fields.
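For example, filtering on an extended field might look roughly like this (column names assumed):

SELECT c.*, e.StringCol1, e.IntCol1
FROM Cars c
JOIN CarsExtension e ON e.entityid = c.id
WHERE e.IntCol1 > 2017;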
If the table metadata and the data have to be inferred from separate tables, it becomes difficult to maintain over a long period of time, and harder still at high data volumes.
HTH
I have a table called transactions. Within that is a field called ipn_type. I would like to create separate table occurrences for the different ipn types I may have.
For example, one value for ipn_type is "dispute". In the past I would create a global field called "rel_dispute" and I would populate that with the value of "dispute". Then I could create a new table occurrence of the transactions table, and make a relationship based on transactions::ipn_type = transactions::rel_dispute. This way only the dispute records would show up in my new table occurrence.
Not long ago, somebody pointed out to me that this is no longer necessary, and there is a simpler way to setup such a relationship to create a new table occurrence. I can't for the life of me remember how that was done, though.
Any information on this would be greatly appreciated. Thanks!
To show a found set of only one type, you must either perform a find or use the Go to Related Record script step to show only related records. What you describe as your previous setup fits the latter.
The simpler way is to perform a find - either on demand, or by a script triggered OnLayoutEnter.
The new 'easy' way is probably:
using only one base relationship, and
filtering the displayed portal by type. This can be done with a global field or a global variable containing the current display type. Multiple portals with different filter conditions are possible as well.
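A minimal sketch of such a portal filter calculation, assuming a global field Globals::gDisplayType (a global variable like $$displayType works the same way):

transactions::ipn_type = Globals::gDisplayType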
~jens
My company uses a third-party vendor to get all of our NPS information. I'm trying to set up a data feed from this vendor into our data warehouse, which runs PostgreSQL.
The feed is in the form of two tab-separated text files: the "question mapping" and the responses. The question map is one row per question, with columns for question id, question text, question label, question type, etc. - straightforward. The responses are one row per survey response, with a column for each question plus things like user id, etc. Here are the two biggest problems:
The survey questions sometimes use the same question ID for different questions, resulting in multiple columns in the response data having the same name but not being the same question.
The number of questions could change, resulting in a different number of columns in the data.
Both of these things make it a real headache to automate a data feed into a single table.
I'm afraid I don't quite know how to phrase my real question other than, "Does anyone have any ideas how I can accomplish this?" If I think of something better than that, I'll come and update this, so for now:
Does anyone have any ideas at all about how I can efficiently set up my automated data feed without having to always drop and recreate everything?
If your data is a mess and doesn't really have well-defined columns, you can use the entity-attribute-value (EAV) pattern: turn each fact into a row with four columns - a unique row id; an entity id, the same for every row extracted from one response; an attribute column holding what would have been the column name, taken from the key in the question map; and a value column holding the value itself. It's not that neat, but you can still query it, and you won't have to drop it when you receive a map with a new column.
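A minimal sketch of that pattern in PostgreSQL, with all names assumed:

CREATE TABLE response_eav (
    row_id      bigserial PRIMARY KEY,
    response_id text NOT NULL,  -- one entity id per survey response
    attribute   text NOT NULL,  -- the would-be column name, from the question map
    value       text            -- the raw value from the response file
);

-- Each response becomes one row per answered question:
INSERT INTO response_eav (response_id, attribute, value)
VALUES ('r-1001', 'nps_score', '9'),
       ('r-1001', 'user_id',   '42');

-- Pivot back out as needed:
SELECT response_id,
       MAX(value) FILTER (WHERE attribute = 'nps_score') AS nps_score
FROM response_eav
GROUP BY response_id;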
I'm trying to copy some field values to a duplicate database, one record at a time. This is used for history, so that I can delete some records in the original database to keep it fast.
I don't want to save the values in variables manually, because there are hundreds of fields. Instead I want to go to the first field, save the field name and value, go over to the other database and save the data, then run a 'Go to Next Field' and loop through all the fields.
This works perfectly, but here is the problem: when a field is a calculation you cannot tab into it, and therefore 'Go to Next Field' skips it.
I thought of doing a 'Go to Object', but then I would need to name all the objects, and I can't find a script step to name objects.
Can anyone out there think of a solution?
Thanks!
This is one of those problems where I always found it easier to do an export/import.
Export all the data you want from the one database, and then import it into the other database. All you need to do is:
Manually specify which fields you want to copy
Map the data from the export to the right fields in the new database/table
You can even write a script to do these things for you.
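A rough outline of such a script pair (file names assumed; the Merge format keeps field names with the data, and the target table and field mapping are configured once inside the Import Records step):

# In the source file: export the found set
Export Records [ With dialog: Off ; "history_batch.mer" ]

# In the history file: import the batch
Import Records [ With dialog: Off ; "history_batch.mer" ]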
There are several ways to achieve this.
To make a "history file", I have found there are several cases out there, so lets take a look.
CASE ONE
Single file: I just want to "keep" a very large file with historical data, because I need to erase all data in my main file.
In this case, you should create a "clone" table (in the same file or in another file, it's the same). Then change every calculation field to the type of the calculation result (number, text, date, and so on), and remove any auto-entered value or calculation from any field (auto-number, auto creation date, etc.). You will end up with a "plain table" with no calculations or auto-entered data.
Then add a field to control duplicate data. If you have, let's say, a unique invoice number for each record, you can use it for this task; but if you do not have a unique field that identifies each record, then you have to create one...
To create such a field, I recommend adding a new field on the clone table, set up as an auto-entered calculation combining fields so the result is unique - something like this: invoiceNumber & "-" & lineNumber & "-" & date.
On the clone table, make sure validation is set to "always", empty values are not allowed, and the value must be unique.
Once you have set up the clone table, you can import your records, making sure that the auto-enter option is on. You can do it as many times as you like; new records will be added and duplicates will be rejected.
If you want, you can make a script that moves all the current records to the historical table before deleting them.
NOTE:
This technique works fine when the data you are trying to keep does not change over time - that is, once a record is created it never changes.
CASE TWO
A historical table must be created, but some fields are updated afterwards.
In the beginning I thought historical data never changes. In some cases I found this is not so - for example, I want to keep historical invoices but at the same time track whether they are paid or not...
In this case you may use the same technique as above, but instead of importing the data you must update it, matching on the "unique" field that identifies each record.
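In scripting terms this is the Import Records step with its "Update matching records in found set" import action, matching on the unique key field - a sketch, with the source file name assumed:

Import Records [ With dialog: Off ; "main.fmp12" ]
# import action: Update matching records; match field: the unique key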
Hope this technique helps
FileMaker's FieldNames() function, along with GetField(), can give you a list of field names and then their values.
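A sketch of a loop built on those two functions - since nothing is ever clicked into, calculation fields are read just like any others:

Set Variable [ $fields ; Value: FieldNames ( Get ( FileName ) ; Get ( LayoutName ) ) ]
Loop
    Set Variable [ $i ; Value: $i + 1 ]
    Exit Loop If [ $i > ValueCount ( $fields ) ]
    Set Variable [ $name ; Value: GetValue ( $fields ; $i ) ]
    Set Variable [ $value ; Value: GetField ( $name ) ]
    # ... write $name / $value to the history table here ...
End Loop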
I have a table like this:
Application,Program,UsedObject
It can have data like this:
A,P1,ZZ
A,P1,BB
A,P2,CC
B,F1,KK
I'd like to create a layout to show:
Application,# of Programs
A,2
B,1
The point is to count the distinct programs.
For the life of me I can't make this work in FileMaker. I've created a summary field to count programs, resetting after each group, but because it doesn't eliminate the duplicate programs I get:
A,3
B,1
Any help much appreciated.
Create a summary field:
cntApplication = Count of Application
Do this by going into Define Fields, creating a field called cntApplication of type summary. In the options dialogue make the summary field a count of Application.
Now create a new layout with a subsummary part and no body. The subsummary part should be sorted on Application. Put the Application and cntApplication fields in the subsummary. If you enter Browse mode and sort by Application you ought to get the data you want.
You can also create a calc field with the formula
GetSummary(cntApplication; Application)
This will allow you to use the total number of Applications within a record.
Since I also generate the data in this form, the solution I've adopted is to fill two tables in FileMaker. One provides the summary view, the other the detailed view.
I think that your problem is down to duplicate records and an inadequate key.
Create a text field called "App_Prog". In the options box set it to an auto-enter calc, unchecking the 'Do not replace...' option, and use the following calc:
Application & "_" & Program
Now create a self join to the table using App_Prog as the field on both sides, and call this 'MatchingApps'.
Now, create (if you don't already have one) a unique serial number field, 'Counter' say, and make sure that you enter a value in each record. (Find all, click in the field, and use the serial number option in 'Replace Field Contents...'.)
Now add a new calc field - Is_Duplicate with the following calc...
If (Counter = MatchingApps::Counter; "Master Record" ; "Duplicate")
Finally, find all, click in the 'Application' field, and use 'Replace Field Contents...' with a calculation to force the auto-enter calc for 'App_Prog' to come up with a value.
Where does this get you? You should now have a set of records marked either "Master Record" or "Duplicate". Do a find on "Master Record", and then you can perform your summary (by Application) to count the distinct application-program pairs.
If you have access to custom functions (you need FileMaker Pro Advanced), I'd do it like this:
Add the RemoveDuplicates function as found here (this is a recursive function that takes a list of strings and returns a list of unique values).
In the relationships graph, add another occurrence of your table and add an Application = Application relationship.
Create a calculated field in the table with the calculation looking something like this:
ValueCount(RemoveDuplicates(List(TABLE2::Program)))
You'll find that each record will contain the number of distinct programs for the given application. Showing a summary for each application should be relatively trivial from here.
I think the best way to do this is to create a separate applications table. So as you've given the data, it would have two records, one for A and one for B.
So, alongside your existing table, which I'll call Objects, add the Applications table and create a relationship from Applications to Objects (via a table occurrence called ObjectsParent), using Application as the match field. Then create a self-join between Objects and itself with both Application and Program as the match fields; the two "table occurrences" are ObjectsParent and ObjectsChildren. Make sure there's a primary key field in Objects set to auto-enter a serial number, or some other method that ensures uniqueness; I'll call it ID.
So your relationship graph has three table occurrences:
Applications::Application = ObjectsParent::Application
ObjectsParent::Application = ObjectsChildren::Application, ObjectsParent::Program = ObjectsChildren::Program
Now create a calculation field in Objects, calculated from the context of ObjectsParent, with the following formula:
AppCount = Count( ObjectsChildren::ID )
Create a calculation field in Applications, calculated from the context of the table occurrence you used to relate it to ObjectsParent, with the following formula:
AppCount = ObjectsParent::AppCount
The count field in Objects will have the same value for every object with the same application, so it doesn't matter which one you get this data from.
If you now view the data in Applications in list view, you can place the Applications::Application and Applications::AppCount fields on the layout and you should get what you've requested.