How to build domains in Tableau? - jasper-reports

Domains are virtual views of a data source that present the data in business terms, allow for localization, and provide data-level security. A Domain is a metadata layer that provides a business view of the data accessed through a data source.
This is the definition of a Domain in the Jaspersoft reporting tool.
I have been given the task of building, or coming up with a design for, something like Jaspersoft Domains in Tableau.
Example of a Domain in Jaspersoft:
In Jaspersoft, when you click on a Domain (e.g. a customer domain), you see the list of tables that were selected for it.
How can I do the same in Tableau?
Please guide me on this.

The metadata layer in Tableau is quite lightweight compared to other BI tools, which is often a plus, but that seems to work against your assigned task.
Usually the best approach in Tableau is to learn to use the tool well, build visualizations for your business domain, test and refine them, and only later focus on factoring out common metadata as your Tableau projects grow. That tends to work better than insisting on a big up-front metadata design.
Still, as you learn Tableau, the features most relevant to your question are those around data connections, which define how to connect to a data source along with its fields, data types, calculations, groups, sets, field roles, etc. You can start with a data connection in a single workbook and later save the connection separately (typically on Tableau Server) so it can be shared among multiple workbooks to reduce duplication. You can also use Tableau Server as a proxy for your database.
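If you do end up sharing connections through Tableau Server, publishing a saved data source can also be scripted. Below is a minimal sketch using the tableauserverclient Python package; the server URL, token, site, project name, and .tdsx file are placeholders you would replace with your own.

    import tableauserverclient as TSC

    # Sign in with a personal access token (all values below are placeholders).
    auth = TSC.PersonalAccessTokenAuth("token-name", "token-secret", site_id="my-site")
    server = TSC.Server("https://my-tableau-server.example.com", use_server_version=True)

    with server.auth.sign_in(auth):
        # Find the target project by name (hypothetical project "Shared Connections").
        all_projects, _ = server.projects.get()
        project = next(p for p in all_projects if p.name == "Shared Connections")

        # Publish a saved data source (.tdsx) so multiple workbooks can reuse it.
        datasource = TSC.DatasourceItem(project.id)
        datasource = server.datasources.publish(
            datasource, "customer_connection.tdsx", TSC.Server.PublishMode.Overwrite
        )
        print("Published data source:", datasource.name)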
For security, look into the groups and access controls provided by Tableau Server, along with user filters.
Some of the features you ask about might be better handled by the database server, for example by defining common views that are visible to different users.
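To illustrate the database-side option, here is a minimal sketch using psycopg2 against a hypothetical PostgreSQL schema; the table, view, and role names are invented for the example.

    import psycopg2

    # Connect as an administrative user (connection details are placeholders).
    conn = psycopg2.connect("dbname=reporting user=admin password=secret host=db.example.com")
    conn.autocommit = True

    with conn.cursor() as cur:
        # Expose only EMEA rows through a view, then grant it to an EMEA-specific role.
        cur.execute("""
            CREATE OR REPLACE VIEW customer_emea AS
            SELECT customer_id, name, country, revenue
            FROM customer
            WHERE region = 'EMEA'
        """)
        cur.execute("GRANT SELECT ON customer_emea TO emea_analysts")

    conn.close()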

Related

Should visualization tools like Tableau or Looker be used for multi-tenant systems?

Visualization tools like Tableau, Looker, and Apache Superset are not meant to be used for multi-tenant products.
For example, a product with thousands of users may want analytics on their data. This needs to be secure, so that company A cannot see company B's visualizations. For this to work, these tools need to know whether a user has the privileges to view the data, which is usually determined through cookies after the user has logged in.
To ensure data is only accessed by authorized users, these third-party tools should not be used. Instead, sticking with Ruby on Rails plus d3.js, Highcharts, etc. is the better option: the data can be managed much more easily through the same authentication you use to log in, so it stays secure.
Actually, Looker handles multi-tenant data just fine. It is quite a common use case for Looker.
You can bind attributes to users that force the right SQL to be written, guaranteeing that each user only sees the data appropriate to them.
https://docs.looker.com/reference/explore-params/access_filter
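The real feature is the LookML access_filter linked above; purely to illustrate the concept in plain code, here is a small Python sketch of binding a tenant attribute to a user and forcing it into every query (the table and attribute names are invented).

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        tenant_id: int  # attribute bound to the user when they log in

    def orders_query(user):
        # Every query is forced through a tenant filter derived from the user attribute.
        sql = "SELECT order_id, total FROM orders WHERE tenant_id = %s"
        return sql, (user.tenant_id,)

    # Company A's analyst can only ever see company A's rows.
    alice = User(name="alice@company-a.com", tenant_id=42)
    print(orders_query(alice))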
We've got lots of customers building extranets for their businesses this way.
Disclosure: I work at Looker.
The complexity of multi-tenant deployments goes far beyond setting up a few filters:
Data privacy - you are one typo away from a data privacy breach with filters alone. You should use the database's security and privacy capabilities to isolate your tenants (see the sketch after this list).
Performance - you need to scale the underlying database to handle the load of concurrent users.
Customization - your tenants might need to load and analyze their own custom data, and they will need custom reports, etc.
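On the database-isolation point above, one concrete option (assuming a PostgreSQL backend; the table and setting names are hypothetical) is row-level security, so the tenant filter lives in the database rather than in the BI tool.

    import psycopg2

    conn = psycopg2.connect("dbname=analytics user=admin password=secret host=db.example.com")
    conn.autocommit = True

    with conn.cursor() as cur:
        # Enforce tenant isolation in the database itself, not in the BI tool's filters.
        cur.execute("ALTER TABLE orders ENABLE ROW LEVEL SECURITY")
        cur.execute("ALTER TABLE orders FORCE ROW LEVEL SECURITY")
        cur.execute("""
            CREATE POLICY tenant_isolation ON orders
            USING (tenant_id::text = current_setting('app.current_tenant'))
        """)

        # Each session declares its tenant before querying; roles with BYPASSRLS
        # still skip the policy, so report users should not have that privilege.
        cur.execute("SET app.current_tenant = '42'")
        cur.execute("SELECT count(*) FROM orders")  # only tenant 42's rows are counted
        print(cur.fetchone())

    conn.close()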
Take a look at gooddata.com and their workspaces.
Disclosure: I work at GoodData

Publishing and changes in a workbook for Tableau Online

I am working on an internal reporting dashboard project. There are roughly three roles/levels for the internal reporting dashboard, such as higher management, project management, etc.
The breakdown of information is different for each role/level.
For the internal reporting dashboard we have to create a database (let's say D, on SQL Server) whose data will come from three databases (let's say A, B, and C) after integrating them.
From my research so far, we can link database D directly using a Tableau live connection in Tableau Desktop (Professional edition) and use it to create a dashboard.
To host that workbook for users, I can publish it to Tableau Online, and to make data visible according to role I can use filters to restrict the data.
Now my questions are:
1. Is this workflow right? Am I missing any step or process that I would need to cater for?
2. How will changes be reflected in the dashboard once it is published? Let's say I have to add a filter/parameter to the dashboard. Do I make the changes to the workbook in Tableau Desktop and they are reflected automatically,
or do I have to host it again on Tableau Online? Please educate me on this too.
Thanks for the assistance; I have attached a proposed workflow image too.
Regards,
Manail Pasha
[workflow image]
If your system is not a transactional database, I would avoid a live database connection. I would recommend a data extract, combining data blending techniques to create an extract, a.k.a. a .tde file.
I would publish a dashboard with user filters that enable row-level security, ensuring users can only see certain data.
Here is a diagram that I would follow if I were you.
To add a filter/parameter, you can either do it in Tableau Desktop and publish to Tableau Online, or log in to Tableau Online and add the filter/parameter in edit mode and save it; with either method the change will be reflected.
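If you make the change in Tableau Desktop, the republish step can also be automated. Here is a minimal sketch using the tableauserverclient Python package against Tableau Online; the site URL, token, project, and workbook file are placeholders.

    import tableauserverclient as TSC

    auth = TSC.PersonalAccessTokenAuth("token-name", "token-secret", site_id="my-site")
    server = TSC.Server("https://10ax.online.tableau.com", use_server_version=True)

    with server.auth.sign_in(auth):
        all_projects, _ = server.projects.get()
        project = next(p for p in all_projects if p.name == "Internal Reporting")

        # Overwrite the existing workbook so the new filters/parameters appear online.
        workbook = TSC.WorkbookItem(project.id, name="Management Dashboard")
        workbook = server.workbooks.publish(
            workbook, "management_dashboard.twbx", TSC.Server.PublishMode.Overwrite
        )
        print("Republished:", workbook.name)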
If your data changes frequently, I would recommend going with a live connection. Extract refreshes can be done incrementally, but the appropriate fields need to be chosen for that (you should also consider how to handle deleted or negated entries). It is up to you to decide whether to go with an extract or a live connection.

How to store large user-specific data

So I'm in the middle of planning a little web app that will require quite large amounts of data stored at the user level. In one case, the system takes a large object from the system level and makes a "user-specific" version, and a user can have multiple of these. The simplest comparison is a form stored in a Google spreadsheet, where the user is expected to take the template spreadsheet and then change not only the answers but also the questions.
Security-wise I am quite OK.
In the second case there is a requirement to store multiple objects, roughly 250 KB to maybe 3 MB each, again at the user level, with the potential to move them to the system level so additional users can access them. As an example, say the user can upload pictures but may not want to share all of them; however, a user may choose to "publish" a small number of them because they are happy with those specific pictures.
What design patterns should I consider, specifically for web apps where users have decent amounts of data? For example, would it make the most sense to use a single large database with a table that keeps track of resources, or to create separate tables per user?
I have considered putting it all in a MongoDB database.
Your approach may be wrong.
If you want to store user-based binary data and make it accessible to the user itself or to the community, you need a hierarchic structure like this:
    userid1
        pic1, pic2, pic3
    userid2
        pic4, pic5, pic6
    community
        pic7, pic8
You could then grant read permission on "community" to all users, and give each user permission to their own directory.
Usually there is nothing wrong with using a database to store binary files if you consider partitioning, role permissions, and an appropriate interface to access the data.
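Since MongoDB was mentioned in the question, here is a minimal sketch of that idea with GridFS via pymongo, where an owner field and a visibility flag in the file metadata stand in for the per-user and community areas; the database and field names are made up for the example.

    import gridfs
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["webapp"]
    fs = gridfs.GridFS(db)

    # Store a picture privately for one user, with ownership kept in the metadata.
    with open("holiday.jpg", "rb") as f:
        file_id = fs.put(f, filename="holiday.jpg",
                         metadata={"owner": "userid1", "visibility": "private"})

    # "Publishing" is just flipping the visibility flag on the file's metadata document.
    db["fs.files"].update_one({"_id": file_id},
                              {"$set": {"metadata.visibility": "community"}})

    # Community listing: everything any user has chosen to publish.
    for doc in db["fs.files"].find({"metadata.visibility": "community"}):
        print(doc["filename"], doc["metadata"]["owner"])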
My suggestion is to use a binary repository like Artifactory.
It provides hierarchic structures, simple search queries using HTTP requests and has caching abilities for frequently queried objects.
I also think that HTTP requests are a lot easier to use, and there is an abstraction layer over the data, which is more secure.
Artifactory is free.
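For reference, storing and fetching files through Artifactory's REST API really is just an HTTP PUT/GET. A small sketch with the Python requests library; the repository URL, paths, and credentials are placeholders.

    import requests

    BASE = "https://artifactory.example.com/artifactory/user-files"
    AUTH = ("uploader", "password-or-api-key")

    # Upload a user's picture into a per-user folder.
    with open("holiday.jpg", "rb") as f:
        resp = requests.put(f"{BASE}/userid1/holiday.jpg", data=f, auth=AUTH)
    resp.raise_for_status()

    # Download it again (the same call works for a shared "community" folder).
    resp = requests.get(f"{BASE}/userid1/holiday.jpg", auth=AUTH)
    resp.raise_for_status()
    with open("holiday_copy.jpg", "wb") as out:
        out.write(resp.content)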

Zend Framework - How to manage subscription plan privileges

I'm developing a REST API using Zend Framework 1.12.3.
I have to implement subscriptions for different types of plans (i.e. Basic and Premium), each plan having different privileges (e.g. Premium may offer instant, daily and weekly SMS notifications, while Basic may offer only weekly SMS notifications).
Also, there may be custom plans only for certain clients.
I've added a column in the users table called subscription, but what I cannot figure out is where to save the privileges for each subscription plan.
Should I save these privileges directly into the DB (i.e. create a table called subscriptions, and another one called subscriptions_privileges having as columns subscription_id, privilege_name and privilege_value), or would it be better if I save them into the config file?
Thanks
Note: this question is not really tied to Zend Framework; it is a system architecture question.
Short answer:
it is much easier to hard-code your subscription plans in your source code configuration files;
it is much more flexible to store this data in a database (you can create an administration panel so managers can manage plans, track the history of plan changes, and use the data in analytical SQL queries). Theoretically you could handle all of this by reading and writing your config files, but a database is exactly the right tool for these tasks.
P.S.: You can add a separate layer of abstraction in your application. Use model objects for your subscriptions, which can be populated either from the database or from your hard-coded config files using different adapters.
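Zend Framework is PHP, so purely as an illustration of that abstraction layer, here is a small language-neutral sketch in Python; the plan and privilege names are invented, and a name column on the subscriptions table is assumed. The same shape maps onto a Zend model plus a table adapter and a config adapter.

    from abc import ABC, abstractmethod

    class SubscriptionPlan:
        def __init__(self, name, privileges):
            self.name = name
            self.privileges = privileges  # e.g. {"sms_notifications": "weekly"}

        def privilege(self, key, default=None):
            return self.privileges.get(key, default)

    class PlanRepository(ABC):
        @abstractmethod
        def get(self, name):
            ...

    class ConfigPlanRepository(PlanRepository):
        """Plans hard-coded in config: simple, but changes require a deploy."""
        PLANS = {
            "basic": {"sms_notifications": "weekly"},
            "premium": {"sms_notifications": "instant"},
        }

        def get(self, name):
            return SubscriptionPlan(name, dict(self.PLANS[name]))

    class DbPlanRepository(PlanRepository):
        """Plans read from the subscriptions / subscriptions_privileges tables."""
        def __init__(self, cursor):
            self.cursor = cursor  # any DB-API cursor, e.g. from a MySQL or Postgres driver

        def get(self, name):
            self.cursor.execute(
                "SELECT p.privilege_name, p.privilege_value "
                "FROM subscriptions s "
                "JOIN subscriptions_privileges p ON p.subscription_id = s.id "
                "WHERE s.name = %s", (name,))
            return SubscriptionPlan(name, dict(self.cursor.fetchall()))

    # The rest of the application only talks to PlanRepository, so you can start
    # with the config version and switch to the database without touching business logic.
    plan = ConfigPlanRepository().get("premium")
    print(plan.privilege("sms_notifications"))  # instant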

Can I change the database server and database a report is pointing to dynamically?

I have a Crystal Reports 2008 report that will be deployed to an InfoView server. There are four different databases the user might want to execute the report against. Each of the four databases has exactly the same schema; only the data in each is different. Each database corresponds to a plant we have around the world.
Instead of creating four different reports (each one connected to one of the four databases), am I able to dynamically change the server/database the report hits based on a value the user enters into a parameter? I'm really trying to avoid having to create four identical reports except for the database connection on each. If this isn't possible, how do developers typically deal with this sort of scenario? I would imagine it's fairly common.
Thanks very much.
InfoView doesn't support dynamically changing a report's datasource. You certainly could modify InfoView source to suit your needs with the BusinessObjects Enterprise SDK, but that will be a challenge and won't be supported by BO.
Another option is to build a custom portal with the BusinessObjects Enterprise SDK, but this will require quite a bit of coding as well.
Probably the best option is to publish the report multiple times, set each datasource as appropriate (via the CMC), and change the name of each copy to indicate its datasource (via the CMC). I believe there is a report property in the CMC that will save the datasource settings so you can quickly republish the reports if you make a change to the original.
I'm not familiar with InfoView, but it is quite common to do what you describe; I've done similar things with ASP.NET and WinForms. If you have access to the Crystal Reports object model, there are extensive options for setting logon info; I think the method is SetDatabaseLogon. If you have subreports, you have to set the login for each of those separately.
The schemas do have to be completely identical, or the user will get warnings.