Exporting specified records from a portal - FileMaker

Hi, does anyone know if it's possible to export specified records from a portal as XML? Currently, when I filter the portal, the export includes all records in the relationship and ignores the portal filter. Is it possible to specify which records to export from a portal without modifying the relationship?
Thanks for any help.

Is it possible to specify which records to export from a portal
without modifying the relationship?
Not really. Well, at least in theory, you could go to the related records and perform a Constrain Found Set there to replicate the filter's action. But then you would have to implement the same logic twice, violating the DRY principle.
If you need the filtered results at the data layer (e.g. for export), then it's time to filter the relationship. Portal filtering is meant for display purposes only.
Note: this is assuming that you actually need this filtering for display purposes, too.

Related

How to quickly locate which sheets/dashboards contain a field?

I am creating a data dictionary and I am supposed to track the location of any used field in a workbook. For example (superstore sample data), I need to specify which sheets/dashboards have the [sub-category] field.
My dataset has hundreds of measures/dimensions/calc fields, so it's incredibly time-consuming to click into every single sheet/dashboard just to see if a field exists there. Is there a quicker way to do this?
One robust, but not free, approach is to use Tableau's Data Catalog, which is part of the Tableau Server Data Management Add-On.
Another option is to build your own cross-reference. You could start with Chris Gerrard's Ruby libraries, described in this article: http://tableaufriction.blogspot.com/2018/09/documenting-dashboards-and-their.html
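If you have the workbook as an unpackaged .twb file, a cheaper option is to search the workbook XML yourself. Below is a minimal Python sketch; the file path and field name are placeholders, and the exact XML structure can vary between Tableau versions.

```python
# Sketch: list the worksheets in a .twb workbook (plain XML) whose definition
# mentions a given field. The path and field name below are placeholders, and
# the exact XML structure can vary between Tableau versions.
import xml.etree.ElementTree as ET

WORKBOOK = "superstore.twb"   # hypothetical path to an unpackaged workbook
FIELD = "Sub-Category"        # field to look for

root = ET.parse(WORKBOOK).getroot()

for worksheet in root.iter("worksheet"):
    name = worksheet.get("name", "<unnamed>")
    # Serialize the worksheet subtree and do a simple case-insensitive search.
    xml_text = ET.tostring(worksheet, encoding="unicode")
    if FIELD.lower() in xml_text.lower():
        print(f"'{FIELD}' appears in worksheet: {name}")

# Dashboards reference worksheets by name (via <zone> elements), so matching
# worksheets can then be cross-referenced against each <dashboard> element.
```

For a packaged .twbx, unzip it first to get at the .twb inside.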

Best practices for parameterizing load of multiple CSV files in Data Factory

I am experimenting with Azure Data Factory to replace some other data-load solutions we currently have, and I'm struggling with finding the best way to organize and parameterize the pipelines to provide the scalability we need.
Our typical pattern is that we build an integration for a particular Platform. This "integration" is essentially the mapping and transform of fields from their data files (CSVs) into our Stage1 SQL database, and by the time the data lands in there, the data types should be set properly and the indexes set.
Within each Platform, we have Customers. Each Customer has their own set of data files that get processed in that Customer context -- within the scope of a Platform, all Customer files follow the same schema (or close to it), but they all get sent to us separately. If you looked at our incoming file store, it might look like this (simplified; there are 20-30 source datasets per customer, depending on the platform):
Platform
    Customer A
        Employees.csv
        PayPeriods.csv
        etc.
    Customer B
        Employees.csv
        PayPeriods.csv
        etc.
Each customer lands in their own SQL schema. So after processing the above, I should have CustomerA.Employees and CustomerB.Employees tables. (This allows a little bit of schema drift between customers, which does happen on some platforms. We handle it later in our stage 2 ETL process.)
What I'm trying to figure out is:
What is the best way to set up ADF so I can effectively manage one set of mappings per platform, and automatically accommodate any new customers we add to that platform without having to change the pipeline/flow?
My current thinking is to have one pipeline per platform, and one data flow per file per platform. The pipeline has a variable, "schemaname", which is set from the path of the file that triggered it (e.g. "CustomerA"). Then, depending on the file name, a branching conditional fires the right data flow: if it's "employees.csv" it runs one data flow; if it's "payperiods.csv" it runs a different one. They would all use the same generic target sink dataset, with the table name parameterized; those parameters are set in the pipeline from the schema variable and the file name from the conditional branch.
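To make that concrete, here is a rough sketch of the parameter derivation in Python. The folder layout ("Platform/Customer/File.csv"), the parameter names, and the file-name-to-data-flow mapping are assumptions based on the description above, not actual ADF configuration; in ADF itself this logic would live in pipeline expressions plus a Switch or If Condition activity.

```python
# Sketch of the parameter derivation the pipeline would do, expressed in Python
# for clarity. The folder layout and the mapping of file names to data flows
# are assumptions based on the description above.
from pathlib import PurePosixPath

# Hypothetical mapping: incoming file name -> data flow to invoke.
DATAFLOW_FOR_FILE = {
    "employees.csv": "df_platform_employees",
    "payperiods.csv": "df_platform_payperiods",
}

def resolve_parameters(trigger_path: str) -> dict:
    """Derive schema name, target table, and data flow from the triggering blob path."""
    parts = PurePosixPath(trigger_path).parts       # e.g. ("Platform", "CustomerA", "Employees.csv")
    platform, customer, file_name = parts[-3], parts[-2], parts[-1]
    dataflow = DATAFLOW_FOR_FILE.get(file_name.lower())
    if dataflow is None:
        raise ValueError(f"No data flow registered for {file_name}")
    return {
        "schemaname": customer,                      # becomes the SQL schema, e.g. CustomerA
        "tablename": PurePosixPath(file_name).stem,  # e.g. Employees -> CustomerA.Employees
        "dataflow": dataflow,
        "platform": platform,
    }

print(resolve_parameters("Platform/CustomerA/Employees.csv"))
```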
Are there any pitfalls to setting it up this way? Am I thinking about this correctly?
This sounds solid. Just be aware that if you define column-specific mappings with expressions that expect those columns to be present, you may get data flow execution failures when those columns are not present in a customer's source files.
The way to protect against that in ADF Data Flows is to use column patterns, which allow you to define mappings that are generic and more flexible.

Create price rules in WCS using OOB commands

Currently I have a price rule that has only one action element, which fetches a price from a price list.
To build this price rule I'm adding entries into the required tables, such as
PRICERULE,
PRELEMENT,
PRELEMENTATTR
and other tables.
Now I need to add more conditions and branches to this price rule in order to fit the requirements (something like this).
But I found that building this price rule by inserting entries directly into the tables (as I did for the simple price rule) is quite complex, because once the price rule is formed it has to be updated on a weekly basis. The updates are things like changing the markup/markdown percentage or changing the start and end dates of the markup.
So my question is:
Instead of directly updating the tables, is there any IBM WCS OOB functionality to achieve this?

Tableau performance

I have a problem with a dashboard in Tableau. The dashboard contains many worksheets, and all of the columns in the report are calculated fields. The problem is that the dashboard takes a very long time to render: the report contains approximately 2 million rows and takes about 5 minutes to generate.
What are possible solutions in this case?
Can I somehow adjust the display so it doesn't show all the records at once?
To reduce the calculation time, try to exclude data you don't need with a data source filter in Tableau. You can also hide or delete unused calculated fields. Another thing you can do is remove sheets that are not used.
Here's a link: https://www.tableau.com/about/blog/2016/1/5-tips-make-your-dashboards-more-performant-48574
Steps to follow to reduce calculation time:
1) Extract the data: switch the connection from Live to Extract, and replace the data source with the extract.
2) Use a user filter so that Tableau only displays the data for a particular user.
I hope this helps solve your problem.
I have one more idea to resolve this issue:
1) When the dashboard loads for the first time, use a dashboard action filter so that the initial load excludes the sheet's data: Dashboard menu -> Actions -> Add Action -> select the sheet and the Exclude option.
2) Switch the data source from Live to Extract (select the Extract radio button).
3) Use a user filter.
I agree with the other answers (use an extract, a dashboard action filter, ...) and I want to add one point:
Drag every field used by any worksheet on the dashboard onto the Detail shelf of every worksheet on that dashboard. Tableau then loads all of the needed data while loading the first worksheet and can reuse that data for the other sheets.
I.e. if a dashboard contains three worksheets (A, B, C), you drag every field used by A onto Detail of B and C, every field used by B onto Detail of A and C, and every field used by C onto Detail of A and B.
We are having a similar issue with 150 million rows, but I want to check whether you are doing the following steps; they may help you. This goes back to the fundamentals of Tableau reporting.
1/ Try to make sure your data set is in star schema format. This helps a lot with reporting.
2/ Try to design the tables and views in the DB so that only the columns used in Tableau are included. Any extra columns in the tables add to the performance issue.
3/ Make sure indexing is done properly on all the fields that are joined.
4/ In my experience a dashboard adds extra performance lag, so try to get as much performance tuning done on the individual sheets as possible before even moving to the dashboard.
5/ If required, try to use materialized views.
Hope this helps.
Try to capture performance metrics using the Performance Recorder option in Tableau.
Check the underlying DB tables and the joins present in the data source layer.
Try using optimized sets and parameters as required, and get rid of less relevant filters.
Try using data extracts with a scheduled refresh and a data source filter that limits the data to the business years you need.

Reporting within CQRS

I'm trying to understand CQRS to see if it can help in a reporting environment.
Problem: a CQRS-designed system is already in production, happily generating commands and events and updating the necessary query views. A new report is required. This report takes a number of parameters: Start Date, End Date, Product Type, and Product Category.
How do I generate the aggregate views for this report, given that:
the query store will initially be empty, and
the report can be passed parameters with very different values?
Do I try to solve this using a CQRS approach, or is there a better alternative?
Thanks
If it is not reasonable to precompute all your report data into a flat view, then just don't do that. You may want to join a bunch of tables for your report. It's your decision what can be precomputed and what is not worth it (CPU, storage considerations).
In your particular case (StartDate, EndDate, ...) I can't see what the problem is with generating a single ViewModel table and just querying it directly with the parameters.
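For instance, here is a minimal sketch of what querying such a precomputed view model could look like. The table and column names (report_sales, sale_date, ...) are hypothetical, and sqlite3 merely stands in for whatever store holds the read model.

```python
# Sketch: query a precomputed report view-model directly with the report
# parameters. Table and column names are hypothetical placeholders.
import sqlite3

def query_report(conn, start_date, end_date, product_type, product_category):
    """Query the flat, precomputed report table with the report parameters."""
    sql = """
        SELECT sale_date, product_type, product_category, amount
        FROM report_sales
        WHERE sale_date BETWEEN ? AND ?
          AND product_type = ?
          AND product_category = ?
        ORDER BY sale_date
    """
    return conn.execute(
        sql, (start_date, end_date, product_type, product_category)
    ).fetchall()

# Example usage against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE report_sales (
    sale_date TEXT, product_type TEXT, product_category TEXT, amount REAL)""")
conn.execute("INSERT INTO report_sales VALUES ('2023-01-15', 'Widget', 'Hardware', 99.0)")
print(query_report(conn, "2023-01-01", "2023-01-31", "Widget", "Hardware"))
```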
Figure out which events are required to gather all report data.
Query all those events, republish them to the endpoint that handles updating the new report table(s).
Wait until all events have been processed.
Put some indexes on the columns that will function as report query criteria.
Done!
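To make the replay step above more concrete, here is a minimal sketch of republishing stored events through a projection that fills the new report table. The event type, handler, and in-memory storage are hypothetical placeholders rather than any particular framework's API.

```python
# Minimal sketch of the replay step: run historical events through the
# projection that populates the new report read model. Event names, the event
# source, and the projection logic are all hypothetical placeholders.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class OrderPlaced:                      # hypothetical event relevant to the report
    sale_date: str
    product_type: str
    product_category: str
    amount: float

class ReportProjection:
    """Builds the new report read model from replayed events."""
    def __init__(self):
        self.rows = []                  # stands in for INSERTs into the report table

    def handle(self, event: OrderPlaced) -> None:
        self.rows.append(
            (event.sale_date, event.product_type, event.product_category, event.amount)
        )

def rebuild_report(events: Iterable[OrderPlaced]) -> ReportProjection:
    projection = ReportProjection()
    for event in events:                # replay in historical order
        projection.handle(event)
    return projection

# Example: replaying two stored events.
history = [
    OrderPlaced("2023-01-15", "Widget", "Hardware", 99.0),
    OrderPlaced("2023-02-02", "Gadget", "Hardware", 45.5),
]
print(rebuild_report(history).rows)
```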