I have a dashboard that uses a live data connection based on custom SQL with embedded parameters (the complete dataset is too large to extract).
This dashboard contains Action Filters.
When I load the dashboard, the data is being refreshed due to the live connection.
After this refresh, I want to interact with the action filters without triggering the live connection extract.
Even when I choose "Pause Automatic Updates", the data source keeps being refreshed after every filter action.
Is there a way to stop those refreshes entirely while interacting with the action filters?
What do you mean by "too heavy to extract"? Will an incremental refresh alleviate some of the load on the extract?
In my experience, action filters work best with data extracted to Tableau Server.
I have two screens:
Homefeed.dart
Profile.dart
On the Homefeed screen, data from various users is fetched from a server and shown as a list of cards.
On the Profile screen, only data that belongs to the logged in user is fetched.
The problem is that there will be an overlap in the data fetched on the two screens. For example, if a user writes a post, it can show up on the Homefeed. Now if the user performs any action such as like, delete, or edit on their post from the Profile screen, then it should also update the same post that was fetched on the Homefeed screen.
Unless the user explicitly refreshes the data and sends a request to the server to fetch the updated data, what would be an ideal way to achieve this synchrony?
I did consider using a realtime database, but this would mean migrating the current project, it might get expensive, and it might have problems of its own.
The other, "hacky" way would be to manipulate the data somehow on the client side (I still haven't figured out how) and show the update instead of fetching new data from the server.
Or is there some other, more ideal way of achieving this that I don't know of?
The best way to reflect any change to a user's post (edit, delete, etc.) made in Profile.dart is to update the database, rather than faking the change on the client side alone. You may reduce database calls by tricking the client, but you greatly increase the chance of inconsistent data in the database. Your database wouldn't be reliable.
Database should be the single source of truth
I would suggest loading the latest data from the database every time the HomeFeed.dart page is loaded. If you are using a real-time database, you don't have to check on every page load.
I am having trouble with Power BI dataset refresh via the REST API.
I get this error whenever I try to execute the dataset refresh API:
Last refresh failed: ...
There is no available gateway.
I'm testing on two accounts, and this happens only on one of them.
What is more confusing is that the storage I'm using is cloud-based (Azure Data Lake), so it shouldn't need a gateway connection. And when I refresh the datasets manually, it works.
When I get the refresh history for further investigation, I see this:
serviceExceptionJson={"errorCode":"DMTS_MonikerWithUnboundDataSources"}
Any help would be appreciated.
I have found a workaround, although I'm still not convinced I understand the root cause (the "why" is still blurry to me).
First, for more context, this is what I'm doing (using the APIs):
1. Upload a dataset to the workspace (import a .pbix and skip the report; I only need the parameters and the Power Query within it)
2. Update the parameters
3. Refresh the dataset
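For reference, the three calls can be sketched like this. This only shows the URL shapes and payloads of the Power BI REST API endpoints involved; the HTTP client, the Azure AD bearer token, and all IDs and names (`g1`, `DataLakeUrl`, etc.) are placeholders or left out entirely:

```python
# Sketch of the three REST calls above; group_id, dataset_id, the parameter
# name, and the file name are placeholders, and auth + HTTP sending are omitted.
BASE = "https://api.powerbi.com/v1.0/myorg/groups/{group_id}"

def import_pbix_url(group_id: str, display_name: str) -> str:
    # Step 1: POST the .pbix here as multipart/form-data;
    # skipReport=true imports the dataset without the report.
    return (BASE.format(group_id=group_id)
            + f"/imports?datasetDisplayName={display_name}&skipReport=true")

def update_parameters_body(name: str, value: str) -> dict:
    # Step 2: JSON body for POST .../datasets/{dataset_id}/Default.UpdateParameters
    return {"updateDetails": [{"name": name, "newValue": value}]}

def refreshes_url(group_id: str, dataset_id: str) -> str:
    # Step 3: POST here (empty body is fine) to trigger a refresh;
    # GET the same URL with ?$top=1 to read the latest refresh status
    # (this is where serviceExceptionJson shows up on failure).
    return BASE.format(group_id=group_id) + f"/datasets/{dataset_id}/refreshes"
```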
The thing is, I have a parameter for the URL of the data lake data, and it takes one of two values depending on the use case/user.
So what I did was remove that parameter and create two different .pbix files, each with its specific URL hard-coded; I then import whichever .pbix matches the use case.
I still don't understand why I get a gateway error, when I would expect an "error while processing data" message instead (it seems to be a data access issue).
I am using Tableau Desktop 8.2 and Tableau Server 8.2 (licensed versions), and workbooks created in Tableau are successfully published to Tableau Server.
But when users want to see the views or workbooks, it takes a very long time to preview or open them.
The workbooks are built against an Amazon Redshift database with more than 5 million records.
Could somebody guide me on this? Why does it take so long to preview or open even after being published to Tableau Server?
First question, are the views performant when opened using only Tableau Desktop? Get them working well on Desktop before introducing Server into the mix.
Then look at the logs in My Tableau Repository, which include query strings and timing info, to see if you can narrow down the cause. You can also try the Performance Recorder feature.
A typical problem is an overly expensive query just to display a dashboard. In that case, simplify. Start with a simple high-level summary viz and then introduce complexity, testing the impact on performance at each step. If one viz is too slow, there are usually alternative approaches available.
Completely agree with Alex; I had a similar issue with HP Vertica. I had a lot of actions set on the dashboard. Since the database structure was final, I created a Tableau extract and used the online Tableau extract in place of the live connection. Voila! That solved my problem, and users are happy with the response time as well. Hope this helps you too.
Tableau provides two modes of data refresh:
Live: Tableau will execute the underlying queries every time the dashboard is referred to / refreshed. Apart from badly formulated queries, this is one of the reasons why your dashboard on Tableau Online might take forever to load.
Extract: The query will be executed once, according to a (your) specified schedule, and the same data will be reflected every time the dashboard is refreshed.
In extract mode, time is spent only when the extract is being refreshed. However, if the extract is not refreshed periodically, the same stale data will be shown on the dashboard. Thus, extract mode is not recommended for representing live data.
You can toggle Live <--> Extract from the Data Source pane of Tableau Desktop (see the top right of the snapshot).
I'm developing a plugin that will pull data from a third-party API. The user inputs a number of options in a normal settings form for the plugin (built with the Redux Framework, which uses the WP Settings API).
The user provided options will then be used to generate a request to the third party API.
Now to my problem/question: how can I store the data that's returned from that API? Is there a built-in way to do this in WordPress, or will I have to create a database table of my own? That seems a bit overkill... Is there any way to "hack" into the Settings API and set custom settings without having to display them in a form on the front end?
Thank you - and happy holidays to everyone!
It sounds like what you want to do is actually just store the data from the remote API request, rather than "options". If you don't want to create a table for them, I can think of three simple approaches.
Transients API
Save the data returned from the API as transients, i.e. temporary cached data. This is generally good for data that's going to expire anyway and thus will need to be refreshed. Set an expiry time! Even if you want to hang onto the data "forever", set an expiry time, or the data will be autoloaded on every page load and thus consume memory even when you don't need it. You can then easily retrieve the data with get_transient; if it has expired, you'll get false, and that is your trigger to make your API call again.
NB: on hosts with memcached or other object caches, there's a good chance that your transients will be pushed out of the object cache sooner than you intend, thus forcing your plugin to retrieve the data again from the API. Transients really are about caching, not "data storage" per se.
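WordPress specifics aside, the transient pattern is just "cache with an expiry, refetch on miss". A minimal language-neutral sketch (shown in Python for illustration; a dict stands in for the real transient store, and `get_api_data` is a hypothetical wrapper around your API call):

```python
import time

_store = {}  # stands in for the options-table-backed transient store

def set_transient(key, value, expiration):
    # like WP's set_transient(): remember the value and when it expires
    _store[key] = (value, time.time() + expiration)

def get_transient(key):
    # like WP's get_transient(): return False once the value has expired
    entry = _store.get(key)
    if entry is None or time.time() >= entry[1]:
        _store.pop(key, None)
        return False
    return entry[0]

def get_api_data():
    # expired (or never fetched) -> call the API again and re-cache
    data = get_transient("my_plugin_api_data")
    if data is False:
        data = {"fetched": "fresh"}   # placeholder for the real API call
        set_transient("my_plugin_api_data", data, 3600)  # cache for an hour
    return data
```

The key point is that the expired case (`False`) is the trigger to refetch, so callers never care whether the data came from the cache or from the API.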
Options
Save your data as custom options using add_option, and specify autoload="no" so that they don't fill up script memory when they aren't needed! Beware that update_option will add the data with autoload="yes" if the option doesn't already exist, so I recommend you delete and then add rather than update. You can then retrieve your data easily.
Custom Post Type
You can easily store your data in the wp_posts table by registering a custom post type; use wp_insert_post to save entries and the usual WordPress post queries to retrieve them. Great for long-term data that you want to hang onto. You can make use of the post_title, post_content, post_excerpt and other standard post fields to store some of your data, and if you need more, you can add post meta fields.
We're using Crystal Reports 11 through their web server. When we run a report, it runs the SQL query and displays the first page of the report in the Crystal web report viewer.
When you hit the next-page button, it reruns the SQL query and displays the next page.
How do we get the requerying of the data to stop?
We also have multiple people running the same reports at the same time (it is a web server after all), and we don't want to cache data between different instances of the same report, we only want to cache the data in each single instance of the report.
The reason to have pagination is not only a presentation concern. Pagination's single most important advantage is lazy loading of data, so that in theory, depending on the given filters, you load only what you need.
Just imagine if you have millions of records in your DB and you load all of them. First of all, it's going to be a lot slower; second, you're fetching a lot of stuff you don't really need. All the web models nowadays are based on lazy loading rather than bulk loading. Think about Google App Engine: you can't retrieve more than 1000 records in a given transaction from the Google Datastore, and you know that if you even try to display them all, your browser will die.
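The lazy-loading idea above boils down to fetching only the window the viewer actually asks for. A minimal sketch (in Python; `fetch_page` is a stand-in for the report's real query, e.g. `SELECT ... LIMIT :page_size OFFSET :page * :page_size`):

```python
def fetch_page(records, page, page_size):
    # stand-in for a paged query: only one window of rows is materialized
    start = page * page_size
    return records[start:start + page_size]

rows = list(range(1_000_000))          # pretend this is the full result set
first_page = fetch_page(rows, 0, 50)   # only 50 rows travel to the client
second_page = fetch_page(rows, 1, 50)  # fetched lazily when "next" is hit
```

Each page costs one small query instead of one enormous one, which is exactly the trade-off the report viewer is making when it re-queries per page.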
I'll close with a question - do you have a performance issue of any kind?
If so, you may think removing pagination will make it better, but it probably won't: you'll reduce the number of round trips to the server, but each single query will be much more resource-consuming.
If not my advice is to leave it alone! :)