Can we combine server-side pagination and client-side filtering in ag-grid?

I need to use server-side pagination along with client-side filtering. I wanted to know if there is any option for that using rowModelType.
I tried setting rowModelType to 'serverSide' and 'clientSide' wherever applicable, but it's not working.

As far as I know, it's not feasible to use server-side pagination and client-side filtering at the same time.
Frankly, I'm not sure how that would work, and if you managed it you'd run a high risk of hitting edge cases, like your filter removing all of the rows in a page returned by the server, leaving no rows showing in the grid.
My suggestion is to do one or the other.
If you have too many rows to hold them all in memory on the client side, then you'll have to do the extra work of implementing server-side filtering alongside the server-side pagination.
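For what it's worth, a minimal sketch of what that can look like with ag-Grid's Server-Side Row Model, where the server receives the page range and the filter model in the same request (the /api/rows endpoint and the response shape are my own assumptions, not from the question):

const gridOptions = {
    columnDefs: [{ field: 'invoiceNumber', filter: 'agTextColumnFilter' }],
    rowModelType: 'serverSide',
    serverSideDatasource: {
        getRows: function (params) {
            // params.request carries startRow/endRow (the page) plus filterModel/sortModel
            fetch('/api/rows', {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify(params.request)
            })
                .then(function (res) { return res.json(); })
                .then(function (data) {
                    // newer ag-Grid versions use params.success; older ones use
                    // params.successCallback(rows, lastRow) / params.failCallback()
                    params.success({ rowData: data.rows, rowCount: data.lastRow });
                })
                .catch(function () { params.fail(); });
        }
    }
};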

Related

Where should data be transformed for the database?

Where should data be transformed for the database? I believe this is called data normalization/sanitization.
In my database I have user created shops- like an Etsy. Let's say a user inputs a price for an item in their shop as "1,000.00". But my database stores the prices as an integer/pennies- "100000". Where should "1,000.00" be converted to "100000"?
These are the two ways I thought of.
In the frontend: The input data is converted from "1,000.00" to "100000" in the frontend before the HTTP request. In this case, the backend would validate that the price is an integer.
In the backend: "1,000.00" is sent to the backend as is, then the backend validates that it is in a price format, then the backend converts the price to an integer "100000" before it is stored in the database.
It seems either would work, but is one way better than the other, or is there a standard way? I would think the second way is best to reduce code duplication since there are likely to be multiple frontends - mobile, web, etc. - and one backend. But the first way also seems cleaner - just send what the API needs.
I'm working with a MERN application if that makes any difference, but I would think this would be language agnostic.
I would go with a mix between the two (which is a best practice, AFAIK).
The frontend has to do some kind of validation anyway, because you don't want to wait for the backend to get a validation response. I would add the actual conversion here as well.
The validation code will be adapted to each frontend, I guess, because each one uses a different framework / language, so I don't necessarily see the code duplication here (the logic, yes, but not the actual code). As a last resort, you can create a common validation library.
The backend should validate again the converted value sent by the frontends, just as a double check, and then store it in the database. You never know if the backend will be integrated with other components in the future and input sanitization is always a best practice.
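As a rough illustration of that split (the function names are mine, not from the question): the frontend converts the display string to integer cents before the request, and the backend re-checks that it really received a non-negative integer before storing it.

// Frontend: convert "1,000.00" to 100000 before sending the request
function priceToCents(display) {
    const normalized = display.replace(/,/g, '');            // "1,000.00" -> "1000.00"
    const cents = Math.round(parseFloat(normalized) * 100);  // -> 100000
    if (!Number.isFinite(cents) || cents < 0) {
        throw new Error('Invalid price');
    }
    return cents;
}

// Backend: double-check the already-converted value before storing it
function assertValidCents(value) {
    if (!Number.isInteger(value) || value < 0) {
        throw new Error('Price must be a non-negative integer number of cents');
    }
    return value;
}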
Short version
Both ways would work and I think there is no standard way. I prefer formatting in the frontend.
Long version
What I do in such cases is to look at my expected business requirements and make a list of pros and cons. Here are just a few thoughts about this.
If you decide to do every transformation of the price (formatting, normalization, sanitization) in the frontend, your backend stays smaller and you will have fewer endpoints, or endpoints with fewer options. Depending on the frontend, you can choose the perfect fit for your end users. The amount of code which is delivered also stays smaller, because the application can be cached and does all the formatting work itself.
If you implement everything in the backend, you have full control over which format is delivered to your users. Especially when dealing with a lot of different languages, it can be helpful to get the correct display value directly from the server.
Furthermore, it can be helpful to take a look at some different APIs of well-known providers and how these handle prices.
The PayPal API uses an amount object to transfer prices as decimals together with a currency code.
"amount": {
"value": "9.87",
"currency": "USD"
}
It's up to you how to handle it in the frontend. Here is a link with an example request from the docs:
https://developer.paypal.com/docs/api/payments.payouts-batch/v1#payouts_post
Stripe uses a slightly different model.
{
    unit_amount: 1600,
    currency: 'usd',
}
It uses integer values in the smallest unit of the currency (e.g. cents) as the amount, together with a currency code, to describe prices. Here are two examples to make it more clear:
https://stripe.com/docs/api/prices/create?lang=node
https://stripe.com/docs/checkout/integration-builder
In both cases, the normalization and sanitization have to be done before making requests. The response will also need formatting before showing it to the user. Of course, most of these requests are done by backend code. But if you look at the prebuilt checkout pages from Stripe or PayPal, these also use normalized and sanitized values for their frontend integrations: https://developer.paypal.com/docs/business/checkout/configure-payments/single-page-app
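To illustrate the formatting direction, here is a small sketch of turning a Stripe-style integer amount back into a display string on the frontend (the locale and currency are just examples):

const formatter = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' });

console.log(formatter.format(1600 / 100));    // "$16.00"
console.log(formatter.format(100000 / 100));  // "$1,000.00"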
Conclusion/My opinion
I would always prefer keeping the backend as simple as possible for security reasons. Less code (i.e. fewer endpoints) means a smaller attack surface. More configuration possibilities mean a lot more effort to make the application secure. Furthermore, you could write another (micro)service which takes over some transformation tasks if you have a business requirement to deliver everything perfectly formatted from the backend. Example use cases may be: if you have a preference for backend code over frontend code (think about your/your team's skills), if you want to deploy a lot of different frontends and want to make sure that they all use a single source of truth for their display values, or maybe if you need to fulfill regulatory requirements to know exactly what is delivered to your users.
Thank you for your question and I hope I have given you some guidance for your own decision. In the end, you will always wish you had chosen a different approach anyway... ;)
For sanitisation, it has to be on the back end for security reasons. Sanitisation is concerned with ensuring only certain fields' values from your web form are even entertained. It's like a guest list at an exclusive club. It's concerned not just with the field (e.g. username) but also the value (e.g. bobbycat, or ); DROP TABLE users;). So it's about ensuring the security of your database.
Normalisation, on the other hand, is concerned with transforming the data before storing them in the database. It's the example you brought up: 1,000 to 1000 because you are storing it as integers without decimals in the database.
Where does this code belong? I think there's no clear winner because it depends on your use case.
If it's a simple matter like making sure the value is an integer and not a string, you should offload that to the web form (i.e. the front end), since form inputs already have a "type" attribute to enforce this.
But imagine a more complicated scenario. Let's say you're building an app that allows users to construct a Facebook ads campaign (your app being the third party developer app, like Smartly.io). That means there will be something like 30 form fields that must be filled out before the user hits "create campaign". And the value in some form fields affect the validity of other parts of the form.
In such a situation, it might make sense to put at least some of the validation in the back end because there is a series of operations your back end needs to run (like create the Post, then create the Ad) in order to ensure validity. It wouldn't make sense for you to code those validations and normalisations in the front end.
So in short, it's a balance you'll need to strike. Offload the easy stuff to the front end, leveraging web APIs and form validations. Leave the more complex normalisation steps to the back end.
On a related note, there's a broader concept of ETL (extract, transform, load) that you'd use if you were trying to consume data from another service and then transforming it to fit the way you store info in your own database. In that case, it's usually a good idea to keep it as a repository on its own - using something like Apache Airflow to manage and visualise the cron jobs.

Are both client and server column filters in ag-grid possible?

I want to know if we can do combination filtering in ag-grid: some column filtering on the client and some on the server. Is that possible?
I was checking the adaptabletools website; they have built a similar feature with serverOptions (link below). I was trying to achieve a similar thing via the ag-grid API. Can you please advise?
https://api.adaptabletools.com/interfaces/_src_adaptableoptions_searchoptions_.searchoptions.html
An update on this question, as I developed AdapTable, which the OP refers to in her question.
We DO enable and facilitate server-side searching, sorting and filtering while keeping ag-Grid in ClientSideRowModel mode, and many of our users take huge advantage of it.
You can learn more at:
https://docs.adaptabletools.com/docs/key-topics/server-functionality
However note that this is suited to the use case where you have a few hundred thousand rows and want to have the best of both worlds; if you have millions of rows of data that needs searching and filtering then you should use ag-Grid's Server or Infinite Row Models (both of which AdapTable fully supports but in different ways to that mentioned in the OP).
Well, the Client-Side Row Model is the default. The grid loads all of the data in one go and can then perform filtering, sorting, grouping, pivoting and aggregation all in memory.
The Server-Side Row Model builds on the Infinite Row Model. In addition to lazy-loading the data as the user scrolls down, it also allows lazy-loading of grouped data with server-side grouping and aggregation. Advanced users will use Server-Side Row Model to do ad-hoc slice and dice of data with server-side aggregations.
Ideally, a developer should choose one of them. Also, AG Grid doesn't provide any method to change rowModelType programmatically.
So the simple answer is no, it can't be done easily.
But I think you can work around it by creating a second, hidden AG Grid with rowModelType = 'clientSide'. Update the data in this second grid whenever the data changes in the first grid, switch between the grids (using hide/show logic) when the user wants to filter on the client side (maybe provide a radio button for that), and copy the filter state, column state, etc. from the second grid back to the first.
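Here is a rough sketch of that idea; the column definitions, the datasource name and the way rows are copied across are illustrative only, not an official ag-Grid pattern:

const columnDefs = [{ field: 'invoiceNumber' }, { field: 'total' }];
let clientGridApi = null;

// hidden grid: default client-side row model, so every column filter runs in memory
const clientGridOptions = {
    columnDefs: columnDefs,
    onGridReady: function (e) { clientGridApi = e.api; }
};

// visible grid: server-side row model with your existing datasource
const serverGridOptions = {
    columnDefs: columnDefs,
    rowModelType: 'serverSide',
    serverSideDatasource: myServerSideDatasource, // whatever datasource you already use
    onModelUpdated: function (e) {
        // mirror the rows currently loaded in the server-side grid into the hidden grid
        const rows = [];
        e.api.forEachNode(function (node) { if (node.data) rows.push(node.data); });
        if (clientGridApi) {
            clientGridApi.setRowData(rows); // setGridOption('rowData', rows) on newer versions
        }
    }
};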

free-jqGrid External Filtering Used With Grid's beforeRequest() or onPaging() Event

Using jqGrid free (version 4.15.6) to show very basic information about invoices (ie: date created, date due, client, total, status). The invoices grid only has a few pertinent columns that are displayed because it is just not needed to show more than that. In reality there are a lot of other invoice-related fields that are not shown. I would like to offer end-users the ability to filter the grid based on a lot of these other parameters that are simply not part of the grid contents.
I know jqGrid offers built-in searching, and you can easily just add hidden columns with all the data, but I feel this is not good for us--invoices contain a lot of data--data that is not necessarily present in just the invoices database table. We want the grid to provide many other filtering options outside of the base invoice data, but we do NOT want to use the built-in filter options. Instead, I would like to use a separate HTML table with a bunch of search fields that our server-side code knows how to interpret. When one decides to invoke the external filter, we want the grid to load all invoices matching that combined filter. And if one chooses to navigate using the grid's paging buttons, we want the grid to continue using the original external filtering parameters.
Hope this makes sense. Maybe I am just overthinking this, but I am fairly certain the grid is designed to use its built-in filtering/searching tools/dialog, and I have not found any way to override this behavior. Actually, I did do this with an older jqGrid, but that involved using jQuery to completely REPLACE the default pager with custom HTML and event handling. I never could figure out how to do it within the older jqGrid itself, so I chose to write it myself. But that code is less than optimal and even I know it is subject to much criticism. Having upgraded to 4.15.6, I want to do this the best way and I want to keep it logical and practical.
I have tried using beforeRequest() and onPaging() events to change the 'url' parameter, thinking that if I modified the url, I could change the GET to include all of our custom filtering fields. It seems that does not work as the url NEVER changes from the originally defined value. Console logging does show the events firing but no change to url. On top of that, the grid ALWAYS passes its own page field, _search field, etc. to the server so the server NEVER sees the filter request.
How does one define their own custom filtering coupled with paging loader and still take advantage of the built-in paging events? What am I missing?
**** DELETED CODE THAT WAS ADDED TO QUESTION THAT DID NOT PERTAIN TO ORIGINAL QUESTION ISSUE *********
It's difficult to answer your question because you didn't post any code fragments showing how you use jqGrid, and because the total number of rows that could need to be displayed across all pages isn't known.
In general there are two main alternatives for implementing custom filtering:
server side filtering
client side filtering
One can additionally use a mix of both kinds of filtering. For example, one can load from the server all invoices matching some fixed filters (all invoices of a specific user, all invoices of one organization, all invoices of the last month) and then use the loadonce: true, forceClientSorting: true options to sort and filter the returned data on the client side. The user could additionally filter the subset of data locally using the filter toolbar or the searching dialog.
Client-side performance has improved substantially in recent years, and loading relatively large JSON data from the server can be done very quickly. Because of that, client-side filtering is strongly recommended. To better understand the performance of local sorting, filtering and paging, I'd recommend you try the functionality on the demo. You will see that the timing of local filtering in a grid with 5000 rows and 13 columns is better than what you can mostly expect from the round trip to the server plus the processing of server-side filtering on even a very well-organized database. That's the reason why I recommend considering client-side sorting (or the loadonce: true, forceClientSorting: true options) as far as possible.
If you need to filter data on the server, then you just need to send additional parameters to the server on every request. One can do that by including additional parameters in postData. See the old answer for additional details. Alternatively, one can use serializeGridData to extend/modify the data which will be sent to the server.
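For example, a minimal sketch of the postData approach (the url and field names are placeholders, not from the question). Function-valued postData properties are re-evaluated on every request, so the external filter survives paging:

$("#grid").jqGrid({
    url: "invoices.php",
    datatype: "json",
    colModel: [
        { name: "invoicenum", label: "Invoice #" },
        { name: "total", label: "Total", sorttype: "number" }
    ],
    pager: true,
    rowNum: 25,
    postData: {
        // read the external filter fields at request time
        fstatus: function () { return $("#fstatus").val(); },
        fuserid: function () { return $("#fuserid").val(); }
    }
});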
After the data are loaded from the server, they can be sorted and filtered locally before the first page of data is displayed in the grid. To force local filtering one just needs to add forceClientSorting: true in addition to the well-known loadonce: true parameter. It forces applying the local logic to the data returned from the server. Thus one can use postData.filters with search: true to force additional local filtering, and the sortname and sortorder parameters to force local sorting.
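A small sketch of that mixed setup, assuming a fixed server-side filter in the url and an initial local filter (all values are examples only):

$("#grid").jqGrid({
    url: "invoices.php?month=last",  // fixed server-side filter
    datatype: "json",
    colModel: [
        { name: "invoicenum" },
        { name: "total", sorttype: "number" }
    ],
    loadonce: true,              // keep the returned data on the client
    forceClientSorting: true,    // apply the local logic before the first display
    sortname: "invoicenum",
    sortorder: "desc",
    search: true,
    postData: {
        filters: JSON.stringify({
            groupOp: "AND",
            rules: [{ field: "total", op: "ge", data: "100" }]
        })
    }
});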
One more important remark about using hidden columns. Every hidden column still forces the creation of DOM elements, which represent unneeded <td> elements. The more DOM elements you place on the page, the slower the page will be. If local data is used (or if loadonce: true is used), then jqGrid holds the data associated with every row twice: once as a JavaScript object and once as cells in the grid (<td> elements). Free jqGrid allows you to use "additional properties" instead of hidden columns. In that case no data will be placed in the DOM of the grid, but the data will be held in JavaScript objects, and one is able to sort or filter by additional properties in the same way as with other columns. In the simplest case one can remove all hidden columns and add the additionalProperties parameter, which should be an array of strings with the names of the additional properties. Instead of strings, the elements of additionalProperties can be objects with the same structure as colModel items. For example, additionalProperties: [{ name: "taskId", sorttype: "integer"}, "isFinal"]. See the demo as an example. The input data of the grid can be seen here. Another demo shows that the searching dialog contains additional properties in addition to the jqGrid columns. The commented-out columns part of searching shows a more advanced way to specify the list and the order of columns and additional properties displayed in the searching dialog.
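For completeness, a tiny configuration sketch using the additionalProperties parameter in place of hidden columns (the url, column and property names are illustrative):

$("#grid").jqGrid({
    url: "tasks.json",
    datatype: "json",
    colModel: [
        { name: "name" },
        { name: "total", sorttype: "number" }
    ],
    // kept only in the local data, not rendered as <td> elements,
    // yet still usable for sorting and in the searching dialog
    additionalProperties: [{ name: "taskId", sorttype: "integer" }, "isFinal"]
});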
Forgive my answering like this, but this question started out on one subject: filtering and paging using an external filtering source. Oleg actually has several demos across many threads that I was able to use to accomplish the custom filtering while maintaining the default built-in paging. So his answer will be the accepted answer for the original question topic.
But while implementing that original solution, I encountered another issue with loading the grid initially. I wanted to have the grid load with default filtering values should no other filter already be in place. That really should have been a different question because it did not affect the first.
I found yet another Oleg reply on a completely different question:
jqGrid - how to set grid to NOT load any data initially?.
Oleg answered that question and that answer solved our second need to load one way, then allow another way.
So, on initial load, we look for the filter params server-side. None given? We pull records using default filtering. Params present? We use the provided params. The difference on initial load is that we do not go out via AJAX. We instead json_encode the data and place it in the grid definition as follows:
$('#grd_invoices').jqGrid({
    ...
    url: '{$modulelink}&sm=130',
    data: {$json_encoded_griddata},
    datatype: 'local',
    ...
});
Since the datatype is set to 'local', the grid does NOT go to the server initially, so the data parameter is used by the grid. Once we are ready to filter, we use Oleg's solution from yet another answer on yet another question to dynamically apply the filter as follows:
var myfilter = { groupOp: 'AND', rules: []};
myfilter.rules.push({field:'fuserid',op:'eq',data:$('#fuserid').val()});
myfilter.rules.push({field:'finvoicenum',op:'eq',data:$('#finvoicenum').val()});
myfilter.rules.push({field:'fdatefield',op:'eq',data:$('#fdatefield').val()});
myfilter.rules.push({field:'fsdate',op:'eq',data:$('#fsdate').val()});
myfilter.rules.push({field:'fedate',op:'eq',data:$('#fedate').val()});
myfilter.rules.push({field:'fwithin',op:'eq',data:$('#fwithin').val()});
myfilter.rules.push({field:'fnotes',op:'eq',data:$('#fnotes').val()});
myfilter.rules.push({field:'fdescription',op:'eq',data:$('#fdescription').val()});
myfilter.rules.push({field:'fpaymentmethod',op:'eq',data:$('#fpaymentmethod').val()});
myfilter.rules.push({field:'fstatus',op:'eq',data:$('#fstatus').val()});
myfilter.rules.push({field:'ftotalfrom',op:'eq',data:$('#ftotalfrom').val()});
myfilter.rules.push({field:'ftotal',op:'eq',data:$('#ftotal').val()});
myfilter.rules.push({field:'fmake',op:'eq',data:$('#fmake').val()});
myfilter.rules.push({field:'fmodel',op:'eq',data:$('#fmodel').val()});
myfilter.rules.push({field:'fserial',op:'eq',data:$('#fserial').val()});
myfilter.rules.push({field:'fitemid',op:'eq',data:$('#fitemid').val()});
myfilter.rules.push({field:'ftaxid',op:'eq',data:$('#ftaxid').val()});
myfilter.rules.push({field:'fsalesrepid',op:'eq',data:$('#fsalesrepid').val()});
var grid = $('#grd_invoices');
grid[0].p.search = myfilter.rules.length>0;
$.extend(grid[0].p.postData,{filters:JSON.stringify(myfilter)});
$('#grd_invoices').jqGrid('setGridParam',{datatype:'json'}).trigger('reloadGrid',[{page:1}]);
This allows us to have the grid show the initial data loaded locally; subsequent filtering then changes the grid datatype to 'json', which forces the grid to go to the server with the new filter params and load the more specifically filtered data.
Credit goes to Oleg because I used many of his posts from many questions to reach the end result. Thank you #Oleg!

Conducting searches with REST that return large datasets?

I'm creating a RESTful Web API for our system in .NET. When conducting a search in my client, I presume that it should hit the /person route, passing parameters when required to filter the data. However, the person object that is returned has quite a lot of nested objects, which could slow down data retrieval. Should I have a separate controller which returns a more skeletonised view of a person, should I continue the way I am going, or should I be making subsequent requests to break down the person?
Actually, there is no silver-bullet way to solve your problem, but there are several approaches which could be useful for you. However, in my opinion, your idea about optimizing the size of the resource representation in search results is correct.
You can include the list of requested fields in the filtering query (for example, see the similar signature/approach in the ES search API). Many search engines follow this approach to reduce redundant response payload.
As you have mentioned, you can break your heavy object into sub-resources, so that you only include links to the nested objects inside the person, without including the whole representations of the inner objects. The HATEOAS approach fits perfectly for this purpose, but it will add extra complexity to your application (and extra flexibility too).
However, you have to choose which approach is better for your particular application, but I think that a good starting point would be the approach with a list of requested fields.
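As a purely hypothetical sketch of the requested-fields approach (the query parameter name and response shape are assumptions), the client would ask only for what the search results screen actually needs:

async function searchPeople(name) {
    // e.g. GET /person?name=smith&fields=id,firstName,lastName
    const res = await fetch('/person?name=' + encodeURIComponent(name) + '&fields=id,firstName,lastName');
    return res.json(); // [{ id: 1, firstName: 'John', lastName: 'Smith' }, ...]
}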

Pagination GWT DataGrid for Large Data

I have a case where my DataGrid might contain thousands of rows of data. My page size is only 50. So I want to bring only that much data onto the client from the server and load further rows as and when required. Is there default support in GWT to do that?
I tried using PageSizePager but then realized that the data is already sent to the client, and that defeats what I am trying to achieve.
Thanks.
Indeed. The aim (among others) of DataGrid, as well as of any other cell widget, is to display large data sets as fast as possible by using pagination and cells.
I suggest you start with the official docs about CellTable (the same applies to DataGrid), and pagination/data retrieval.
You'll probably need an AsyncDataProvider to asynchronously fetch data and an AbstractPager (say, SimplePager) to force retrieval of more data.
You can also decide to extend the current range of rows (say, infinite scroll) instead of having multiple pages (by using KeyboardPagingPolicy.INCREASE_RANGE).
In general, the question is quite broad and can hardly be answered in a few lines (or without specifying more). The bits are all in the links above, and don't forget to have a look at the cell widget samples in the showcase, as they cover almost everything you will need.
Hope to get you started.