What are the current limits on the amount of data a Smartsheet sheet can contain, and what are they based on?
I've found some links mentioning 5,000 records and 500 columns, but I've hit the limit with a sheet containing only 1,626 rows and 123 columns.
What are the exact specifications, and is there any way to override the limits using the API?
There are maximums on the number of rows, columns, and cells in a sheet, and there is no method in the Smartsheet API to override them. The Smartsheet Help Center article on this currently states the maximums as follows:
5,000 Rows
200 Columns
200,000 Cells
It is possible to reach one of these maximums before reaching another: 5,000 rows at 200 columns would be 1,000,000 cells, five times the cell maximum. Your sheet is an example of the reverse: 1,626 rows × 123 columns is 199,998 cells, so adding even one more row (1,627 × 123 = 200,121) pushes it over the 200,000-cell cap. Once any of these maximums is reached you may see errors, and the amount of data in the sheet should be reduced.
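If it helps, the arithmetic is easy to script so you can see which cap a sheet of a given shape hits first. A minimal TypeScript sketch (the constants are just the maximums quoted above; the function name is made up):

    // Published maximums from the Help Center article quoted above.
    const MAX_ROWS = 5000;
    const MAX_COLUMNS = 200;
    const MAX_CELLS = 200000;

    // Return the list of limits a sheet of this shape would exceed.
    function exceededLimits(rows: number, columns: number): string[] {
      const exceeded: string[] = [];
      if (rows > MAX_ROWS) exceeded.push(`rows: ${rows} > ${MAX_ROWS}`);
      if (columns > MAX_COLUMNS) exceeded.push(`columns: ${columns} > ${MAX_COLUMNS}`);
      if (rows * columns > MAX_CELLS) exceeded.push(`cells: ${rows * columns} > ${MAX_CELLS}`);
      return exceeded;
    }

    console.log(exceededLimits(1626, 123)); // [] -- 199,998 cells, just under the cap
    console.log(exceededLimits(1627, 123)); // ["cells: 200121 > 200000"]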
But these published maximums aren't absolute, either. Depending on your use of other Smartsheet features, such as formulas, cell linking, and conditional formatting, the practical amount of data a sheet can hold may be lower. Many variables affect a sheet's performance and capacity, so the effective limits aren't precise.
Finding out what a given sheet can hold can therefore take some trial and error. It's best to divide the data into logical groupings as well as you can, to keep each sheet working the way you need.
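And while the API can't raise the limits, it can help you watch them. A rough sketch using the REST API (SHEET_ID and ACCESS_TOKEN are placeholders, and totalRowCount is the field name I believe the Sheet object returns; verify against the API docs):

    const SHEET_ID = 1234567890;         // placeholder
    const ACCESS_TOKEN = "your-token";   // placeholder

    // Fetch the sheet and report how close it is to the row cap.
    const resp = await fetch(`https://api.smartsheet.com/2.0/sheets/${SHEET_ID}`, {
      headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
    });
    const sheet = await resp.json();
    // totalRowCount on the Sheet object is an assumption -- check the docs.
    console.log(`rows: ${sheet.totalRowCount}, columns: ${sheet.columns.length}`);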
I need to plot trend charts in a React app based on user inputs such as timestamps, devices, etc. I have the related time series data in DynamoDB and in S3 (which I can query using Athena).
Returning all those millions of data points for a graph seems unreasonable and is super laggy.
I guess one option is "binning", where I decide the number of bins based on how big the time range is and take the average of the readings in each bin. However, I'm concerned about how well that will preserve the drops and highs, which we need to show accurately.
Both Athena queries and DynamoDB queries (the latter due to the 1 MB page limit) seem fairly slow so far.
Of course, the size of the response payload is another concern, since API Gateway and Lambda cap it at 10 MB and 6 MB respectively.
Any ideas?
I can't suggest anything smarter than "binning", but if you are concerned that the bucket interval might become too wide and too much detail might be lost, you can fix the interval and then create more than one table. For example, the interval can be 1 hour, with a new table for each week.
This is what we did when we had to deal with time series in DynamoDB. At some point, we decided to switch to Amazon Timestream.
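A minimal sketch of the binning itself (the types and names are made up); keeping each bin's min and max alongside the average is what preserves the drops and highs the question worries about:

    interface Reading { timestamp: number; value: number } // epoch millis
    interface Bin { start: number; avg: number; min: number; max: number; count: number }

    // Downsample readings into fixed-width bins so extremes survive averaging.
    function binReadings(readings: Reading[], binMs: number): Bin[] {
      const bins = new Map<number, Bin>();
      for (const r of readings) {
        const start = Math.floor(r.timestamp / binMs) * binMs;
        const b = bins.get(start);
        if (!b) {
          bins.set(start, { start, avg: r.value, min: r.value, max: r.value, count: 1 });
        } else {
          b.avg = (b.avg * b.count + r.value) / (b.count + 1); // running mean
          b.min = Math.min(b.min, r.value);
          b.max = Math.max(b.max, r.value);
          b.count += 1;
        }
      }
      return [...bins.values()].sort((a, b) => a.start - b.start);
    }

    // e.g. one-hour bins: binReadings(points, 60 * 60 * 1000)

Plotting the min/max as a band around the average line keeps the response payload small while still showing where the spikes were.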
I have a bunch of data where the hours taken to process an item range from 3 to 3,000. Most of the data is under 1,000 hours.
I am creating a boxplot of that data. It has a large number of outliers that I don't need to display, but that I do need in the analysis.
I have tried both 'scale_y_continuous(limits=c(0,1000))' and 'ylim(0,1000)', but both appear to change the data used to build the boxplot. I lowered the limit to 20 to test this theory and still got a complete plot, which can only mean that the method I'm using to limit the axis also limits the range of data analysed.
I'd like to limit the y axis without limiting the range of data used in the analysis. What function do I use to accomplish that?
Many thanks.
It turns out that 'coord_cartesian(ylim = c(nnn,nnn))' is what I needed to use. Unlike 'ylim()' and 'scale_y_continuous(limits = ...)', which drop out-of-range observations before the boxplot statistics are computed, 'coord_cartesian()' only zooms the view, so the full dataset still feeds the analysis.
We've reached the limit for hypercubes and need to extract more than 10,000 data points using the Capability API. (I use "data points" for lack of a better word for the individual cells the API sends; 10,000 is the maximum when you multiply the width and height of your initial fetch.) Has anyone been able to get the next page for hypercubes? Note that our requirement is for mashups, not extensions.
We did a workaround, but it required us to break up our dataset and it takes a little longer.
It makes you think: since Qlik is a data analytics tool, there should be a way to get all of your data. In an era where we process millions if not billions of records, 10,000 data points (not even records) is minuscule.
I should also mention that the app we are using this for does stock analysis; users want to see trends and need to see information on individual points as tooltips. With the number of dimensions and measures we pass (7 times the number of stocks, about 20, = 140), we are constrained to only about 70 days (10,000 / 140).
We are using Qlik Sense Server 11.24.4 (Qlik Sense November 2017 Patch 2).
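For anyone hitting the same wall, our workaround amounts to paging the cube: keep the initial fetch small, then request the remaining rows in slices of at most 10,000 cells each. A rough sketch (we used something along these lines in our mashup; treat the getLayout/getHyperCubeData details as assumptions to verify against the Engine API docs):

    // cubeDef is your qHyperCubeDef; app comes from qlik.openApp(...).
    app.createCube(cubeDef).then(function (model) {
      model.getLayout().then(function (layout) {
        const width = layout.qHyperCube.qSize.qcx;    // e.g. 140 in our case
        const totalRows = layout.qHyperCube.qSize.qcy;
        const pageHeight = Math.floor(10000 / width); // max rows per request
        const pages: any[] = [];
        for (let top = 0; top < totalRows; top += pageHeight) {
          pages.push({
            qTop: top,
            qLeft: 0,
            qWidth: width,
            qHeight: Math.min(pageHeight, totalRows - top),
          });
        }
        // Each reply's qMatrix holds one slice of rows; concatenate them.
        Promise.all(pages.map(function (p) {
          return model.getHyperCubeData('/qHyperCubeDef', [p]);
        })).then(function (replies) {
          // flatten the replies' qMatrix arrays into the full dataset here
        });
      });
    });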
I have a dataset with various wait-time metrics for all appointments in a practice over a year (check-in to call-back, call-back to check-out, etc.). It contains the appointment time (one of about 40 15-minute slots), the provider, and the various wait times.
I can get Tableau to show me, for each 15-minute slot, the average wait time for each provider in the practice.
What I can't seem to do is also display the overall practice average for that time slot, so I can compare each provider against the "office standard".
I'm super new to trying out Tableau, so I am sure it is something very simple.
Thanks in advance.
Use a level-of-detail (LOD) calculated field. An LOD calculation occurs at whatever aggregation level you specify, rather than what's on the row or column shelf.
You didn't provide any info about your data set, so I'll use made-up names here.
This gives you the overall average wait time, regardless of other dimensions on row/column shelves:
{FIXED : avg([wait time])}
This gives you the overall average wait time per provider, regardless of other dimensions on row/column shelves:
{FIXED [Provider Name] : avg([wait time])}
See the online Tableau help at https://onlinehelp.tableau.com/current/pro/desktop/en-us/calculations_calculatedfields_lod_overview.html for more information. Note that FIXED calculations are evaluated before ordinary dimension filters, so they ignore them by default; if you need an LOD calculation that respects your filters, look at the INCLUDE and EXCLUDE keywords, or add the filters to context.
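For the specific comparison in the question (each provider vs. the office standard within a time slot), you can fix the LOD to the slot dimension instead; assuming a dimension named [Appt Time Slot] (made up, like the other names here):
{FIXED [Appt Time Slot] : avg([wait time])}
Save that as, say, [Office Avg Wait]; since it is constant within a slot, a calculated field like AVG([wait time]) - AVG([Office Avg Wait]) then shows each provider's gap from the office average for that slot.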
I am trying to create a graph with two lines, driven by two filters on the same dimension.
The dimension has 20+ values. I'd like one line to show data for just one selected value, and the other line to show data excluding that same value.
I've tried the following:
- Creating a duplicate/copy of the dimension, filtering the original to the first value and the copy to the second. When I do this, the graph disappears.
- Creating a calculated field that tries to split the measures up. This isn't letting me track the count.
I want this on the same axis. The best I've been able to do is create two sheets, one per filter, and stack them in a dashboard.
My end user wants the lines in the same visual; otherwise I'd be happy with the dashboard approach. Either way, I'd like to know how to do this.
It is a little hard to tell exactly what you want to achieve, but the problem with filtering is common.
The important principle is that Tableau filters the whole dataset by row. Duplicating the dimension won't help, because a filter on the original dimension also removes the corresponding rows from the copy. Any solution has to be clever enough to work around this.
One solution is to build new fields that use a calculation rather than a filter to produce the result. Say you have a dimension [size] holding numbers from 1 to 10, and you want to compare the number of rows including and excluding the value 5. You could create a new field with a formula like: if [size] <> 5 then 1 else 0 end
Summing the new field gives a count of the rows that don't contain a 5, and this can be compared directly with a row count of the original [size] field, which includes the 5s (see the example below).
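Concretely, with a made-up field name [Not Five] holding the formula above, the two lines can come from two measures plotted on the same axis (for example via Measure Values):
Rows excluding 5: SUM([Not Five])
All rows, including 5: COUNT([size])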
This basic principle can be extended to much more complex logic. The essential point is to realise that filters act on every row in your data and can't, by themselves, show comparisons with alternative filter choices on a single visualisation.
Depending on the nature of your problem there may be other solutions worth looking at including sets and groups but you would need to provide more specific details for users here to tell you whether they would be useful.
You can also make a set out of the values of the dimension and place it on the required shelf. You then have the original dimension, which plots as usual, and the set, which contains only the data you need. A filter can't give you that independence, because it applies to everything in the view.