ExtJS 4.2 grid is not displaying all the records

I have a grid with around 1,500 records and pagination at 50 records per page. But whenever I increase the page size to 1,000, the grid does not show all the records, even though my JSON call returns all 1,000 of them.
Why is the grid not showing all the records?
Any help would be highly appreciated.
Thanks,

The problem was that I had set the Ajax timeout to 2 minutes, but the request was taking longer than that, so there seems to be a performance issue. I need to correct the logic so that it pulls all the records in as little time as possible.
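Until the server-side performance is fixed, raising the proxy timeout at least keeps the grid from discarding a slow response. A minimal sketch, assuming a plain Ajax-proxy store; the endpoint, fields, and JSON property names are hypothetical (the timeout config on Ext.data.proxy.Server is in milliseconds and defaults to 30,000):

// Hypothetical store: raise the Ajax proxy timeout so a large
// page (e.g. 1,000 records) has time to arrive before the
// request is aborted.
var store = Ext.create('Ext.data.Store', {
    fields: ['id', 'name'],          // assumed model fields
    pageSize: 1000,
    proxy: {
        type: 'ajax',
        url: '/records',             // hypothetical endpoint
        timeout: 300000,             // 5 minutes, in milliseconds
        reader: {
            type: 'json',
            root: 'data',            // assumed JSON root property
            totalProperty: 'total'   // assumed total-count property
        }
    }
});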

Related

Line chart data handling issue

What is the best way to handle line chart data to send through an API?
We have a chart like this: it has hours, days, weeks, and months of data, so how can I manage it in an easy way?
I tried plain X,Y values, but there is bulk data in every category, so it is hard to handle.
My question is about what data I should get from the server so that I can show this bulk data easily.
As far as I can figure out, you are facing a data-processing issue: you convert all the per-second data into hours, months, and the other required formats every second.
If that is the issue, you can follow these steps to avoid processing all the data every second:
Process all data only once, when you receive it for the first time.
On the next second, when new data arrives, do not process everything again; just take the latest data point and show it in the heart-rate display.
You can process all data again after some time, based on your requirements (this can take time).
As far as I can tell, you are receiving all the data every second, but your chart's minimum time unit is hours, so you have to process the same data every second, which consumes extra time and results in delay.
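To illustrate, here is a minimal sketch of that incremental approach; the hourly bucket size and the sample shape are assumptions made for the example:

// Aggregate per-second samples into hourly buckets once, then
// append only the newest sample on each subsequent update.
type Sample = { timestamp: number; value: number }; // epoch seconds

const HOUR = 3600;
const hourlyBuckets = new Map<number, { sum: number; count: number }>();

// Full pass: run once, when the history first arrives.
function processAll(samples: Sample[]): void {
  hourlyBuckets.clear();
  for (const s of samples) addSample(s);
}

// Incremental pass: run every second with only the latest sample.
function addSample(s: Sample): void {
  const bucket = Math.floor(s.timestamp / HOUR) * HOUR;
  const entry = hourlyBuckets.get(bucket) ?? { sum: 0, count: 0 };
  entry.sum += s.value;
  entry.count += 1;
  hourlyBuckets.set(bucket, entry);
}

// Chart-ready averages: one (x, y) point per hour.
function hourlyAverages(): Array<[number, number]> {
  return [...hourlyBuckets.entries()]
    .sort((a, b) => a[0] - b[0])
    .map(([t, e]) => [t, e.sum / e.count]);
}

This way the per-second update touches only one bucket instead of reprocessing the whole history.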
If this is not the exact issue you are facing, please update your question with the exact problem.

PostgreSQL delete and aggregate data periodically

I'm developing a sensor monitoring application using Thingsboard CE and PostgreSQL.
Context:
We collect data every second so that we can have a real-time view of the sensors' measurements.
This, however, is very demanding on storage, and such granularity is not a requirement for anything beyond real-time monitoring. For example, there is no need to check measurements made last week at 1-second intervals, hence no need to keep such large volumes of data occupying resources. An average value every 5 minutes would be perfectly fine when consulting the history of previous days.
Question:
This raises the question of how to delete existing rows from the database while aggregating the data being deleted and inserting a new row that averages the deleted values over a given interval. For example, I would like to keep raw data (measurements every second) for the present day and aggregated data (an average every 5 minutes) for the present month, and so on.
What would be the best course of action to tackle this problem?
I checked whether PostgreSQL had anything resembling this functionality but didn't find anything. My main idea is to use a cron job to periodically perform the aggregation and deletion that turns raw data into aggregated data. Can anyone think of a better option? I very much welcome any suggestions and input.
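As one possible shape for that cron job, here is a sketch written as a Node script using node-postgres; the raw_readings and agg_readings tables and their sensor_id, ts, and value columns are hypothetical stand-ins for the real Thingsboard schema:

import { Client } from 'pg';

// Roll everything older than today into 5-minute averages, then
// delete the raw rows, all in one transaction. Intended to be run
// from cron once a day, shortly after midnight.
async function rollUpOldReadings(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    await client.query('BEGIN');
    await client.query(`
      INSERT INTO agg_readings (sensor_id, bucket, avg_value)
      SELECT sensor_id,
             to_timestamp(floor(extract(epoch FROM ts) / 300) * 300),
             avg(value)
      FROM raw_readings
      WHERE ts < current_date
      GROUP BY 1, 2
    `);
    await client.query(`DELETE FROM raw_readings WHERE ts < current_date`);
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    await client.end();
  }
}

Because the insert and the delete share one transaction, a failure leaves the raw data untouched rather than half-aggregated.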

Staggered API call in Power BI

I'm fairly new to Power BI, so forgive my ignorance if this is a stupid question. I've set up an API query to pull records from a third-party app, but the app's rate limiting allows a maximum of 500 records per refresh, with some timing restrictions.
How can I set up my query to stagger the refresh, starting where it left off each time? For example, if there are 2,000 records to pull, I'd want to stagger that to 500 (new) records pulled every minute until complete. I considered using incremental refresh, but there are no variables that group the data into small enough chunks.
Help? Can I provide anything else?
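For what it's worth, outside of Power Query's M language the pattern being described is plain offset-based pagination with a pause between calls. This generic sketch uses a hypothetical endpoint and limit/offset parameters purely to illustrate the idea, not any Power BI API:

// Staggered pull: fetch 500 records per request, resuming from the
// last offset, and wait a minute between calls to respect the rate limit.
const PAGE_SIZE = 500;
const WAIT_MS = 60_000; // one minute

async function pullAll(baseUrl: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let offset = 0;
  while (true) {
    const res = await fetch(`${baseUrl}?limit=${PAGE_SIZE}&offset=${offset}`);
    const batch: unknown[] = await res.json();
    all.push(...batch);
    if (batch.length < PAGE_SIZE) break; // last, possibly partial, page
    offset += PAGE_SIZE;
    await new Promise((resolve) => setTimeout(resolve, WAIT_MS));
  }
  return all;
}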

Prometheus: aggregate table data from an offset, i.e. pull historical data from 2 weeks ago to the present

I am constructing a table within Grafana with Prometheus as a data source. Right now my queries are set to instant, so the table shows scrape data from the instant the query is made (in my case, data from the past 2 days).
However, I want to see data from the past 14 days. I know that you can adjust the time shift in Grafana and use the offset <timerange> modifier to shift the moment the query is evaluated, but these only adjust the query execution point.
Using a range vector such as go_info[10h] does indeed go back that far, but the scrapes are done at 15 s intervals, so it produces duplicate data, and the results are still for a query evaluated at that instant (not at an offset time point), which I don't want.
I am wondering if there's a way to gather data from two weeks ago until today, essentially aggregating data from multiple offset time points.
I've tried writing multiple queries on my table to do this, e.g.:
go_info offset 2d
go_info offset 3d
and so on. However, this doesn't seem very efficient, and the values from each query end up in different columns (a problem I could probably alleviate by altering the query, but that doesn't address the complexity of the queries).
Is there a more efficient, simpler way to do this? I understand that the latest version of Prometheus offers subqueries as a feature, but I am currently not able to upgrade Prometheus (at least not easily, given the way it's currently set up), and I am also not sure it would solve my problem. If it is indeed the answer to my question, it will be worth the upgrade; I just haven't had an environment to test it in.
Thanks for any help anyone can provide :)
Figured it out. It's not pretty, but I had to use offset <#>d for each query on a single metric, e.g.:
something_metric offset 1d
something_metric offset 2d
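For reference, if the upgrade ever becomes feasible: the subquery support added in Prometheus 2.7 can collapse that stack of offset queries into a single expression, for example

avg_over_time(something_metric[14d:1d])

which evaluates something_metric once per day over the last 14 days and averages the results.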

Algolia Record count surge

When I uploaded records from the Admin Panel today, I found a strange increase in the stats that I can't understand. I uploaded a JSON file with 113 records, but the stats display 298,000! I can't understand why this is happening. Is it a problem in Algolia, and is it advisable to trust Algolia billing? We can't find any ticket support for Algolia either.
The information about the number of records should be accurate. If you click on the day view in the upper-right corner, you should see a spike in the number of records. You can also check the number of operations performed during the same period in the graph below; it should be higher than the number of records.
Best regards
Oops, the problem was the CSV file that was uploaded. Microsoft Excel generated a CSV file with empty commas on more than a million lines (just commas, with no data), and when it was uploaded, Algolia apparently treated those lines as records and billed for them.