Within Delphi Seattle, I am using the Delphi REST components to retrieve data via REST services. My data provider appears to limit results to 1000 rows at a time, meaning I need to use pagination. I know a pagination URL is returned in the REST data stream. So, a couple of questions...
(1) Do the Delphi components support a GetNextPage (or something similar)? If so, I could not find it.
(2) How do I retrieve the URL to get the next page? Do I then update the TRESTRequest resource property and Execute again?
(3) I am using a RESTResponseDataSetAdapter to access this data (via a DataSource and ClientDataSet). I am assuming that there is NO WAY to "combine" the data results from multiple REST calls. For example, if I retrieve 1,000 rows via my first call, and 300 rows via the second call, there is no way to access all 1,300 rows at the same time?
I have looked on Google, as well as the REST documentation, and did not find anything useful. Any help appreciated.
There is no single standard way to implement pagination, as different Web/REST servers implement it in their own way. It's next to impossible for these components to have built-in pagination options covering any and every possible scenario.
Whatever service you're using should provide you details of how to implement pagination. Usually, this is part of the query string. For example...
http://someserver.com/someresource?pageSize=100&page=1
...or sometimes perhaps in the resource...
http://someserver.com/someresource/1/
...or sometimes in the HTTP headers...
Page-Size: 100
Page: 1
I've also seen some servers which provide a URL in their response, pre-defined and ready for you to use to navigate to the next page of results...
{
  "next_page": "http://someserver.com/someresource?pageSize=100&page=3",
  "prev_page": "http://someserver.com/someresource?pageSize=100&page=1"
}
But again, every server is different. I've never seen two REST servers that follow exactly the same rules as each other.
You will just have to read the rules as documented by this service, and implement your pagination in each and every request, as you need.
That being said, whenever I write any sort of API wrapper, the first step is to establish a standard communication layer that implements everything that is common across all requests available on that particular service. That is where I would add pagination options, working according to how that service was designed.
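To questions (2) and (3) from the original post: the usual pattern is exactly that -- take the returned pagination URL, execute again, and append each batch of rows to what you already have, so you can absolutely end up with all 1,300 rows in one place. As a rough sketch (in TypeScript rather than Delphi, and assuming the hypothetical next_page response shape shown above):

// Sketch: follow next_page links until they run out, accumulating rows.
// The field names "rows" and "next_page" are assumptions; substitute
// whatever your provider actually returns.
interface Page {
  rows: unknown[];
  next_page?: string;
}

async function fetchAllPages(firstUrl: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let url: string | undefined = firstUrl;
  while (url) {
    const response = await fetch(url);
    const page: Page = await response.json();
    all.push(...page.rows);   // e.g. 1,000 rows, then 300 more
    url = page.next_page;     // absent on the last page, ending the loop
  }
  return all;                 // all 1,300 rows together
}

In Delphi terms, the equivalent loop would presumably update the TRESTRequest URL or parameters from the returned link, call Execute again, and append the new records to the ClientDataSet rather than replacing them, giving you all pages in a single dataset.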
I have a complex request to send to the server. In summary, I am creating a feed system.
In my request I use two tables.
First, I start with the logged-in user's ID and pull all the other users they follow from a FOLLOW table.
So now I have the logged-in user plus an array of the other users they follow.
The second step is that I use a FEED table; the complexity is that I would like to pull all the actions from this table that are performed either by the main user or by the followed users.
I am using GraphQL for all my other requests; however, for a complex request like this one, I am thinking that REST is better suited.
I would like to know your thoughts.
There's no such thing as "better" in the abstract. It all depends on what you need, what your architecture is and, after all, what you know how to use best.
GraphQL is great for a complex request like this, because you can return exactly what you need and nothing more. So if you're asking whether GraphQL can handle it: for sure it can!
Where is this complexity?
You can use one GraphQL query containing both user { followers { feeds { action ... } } } and user { feeds { action ... } } - both action arrays will be available in Apollo.
You can always combine the results from these two arrays into one on the client side, from the normalized Apollo cache, for whatever a component needs. You keep both sets separated, as they are separated in reality, and universal for future needs (another app, client, admin).
If you really want or need them combined server-side, just add the user to his followers in the resolver, for a query like user { userAndFollowers { feeds { action ... } } } - it can be done beside the main schema, just by adding an additional 'branch'.
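For illustration, a minimal sketch of that extra branch's resolver (Node/TypeScript; the db helper is a hypothetical stand-in for your FOLLOW-table query):

// Hypothetical data-access stub standing in for the real FOLLOW-table query.
declare const db: {
  getFollowedUsers(id: string): Promise<Array<{ id: string }>>;
};

// Resolver sketch: merge the user with the users they follow, so
// feeds/action can then be resolved uniformly over the whole set.
const resolvers = {
  User: {
    userAndFollowers: async (user: { id: string }) => {
      const followed = await db.getFollowedUsers(user.id);
      return [user, ...followed]; // "add the user to his followers"
    },
  },
};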
It always depends on the details... but is REST better? In which version/convention/'standard'? Good joke ;) - no offence. There are tons of pros/cons/comparisons everywhere... try them, read up, and choose what suits your requirements.
I am trying to design a RESTful web API for mostly CRUD operations.
I have a design dilemma on how to model a save action for an entity which can also have optional side effects, like updating other "child" entities that were not part of the original entity.
Example:
Template Entity
Child documents Entity
Template can have multiple child documents
If a template is updated, all or some of the child documents based on that template can be updated.
GET /templates/{id} -> Returns template
POST /templates/ -> Creates template
PUT /templates/ -> Updates template
Now if we want to update template and also instruct the server to update all documents based on the template, what would be a good design?
1)
PUT /templates/
{
  template: {
    ..
  },
  childDocumentsIds: [1, 3, 7...]
}

2)
PUT /templates?childDocumentIds=1,3,7
{
  template
}
Similar questions have already been asked, but they do not quite answer my question:
How to design REST API for non-CRUD "commands" like activate and deactivate of a resource?
What RESTful HTTP request for executing actions on the server?
How to Route non-CRUD actions in a RESTful ASP.NET Web API?
I am trying to judge whether other people have similar questions when designing REST APIs. Also, lately, after experience with a few of them, I think we can do better than REST APIs.
"I think we can do better than REST APIs."
The REST architectural constraints are designed with a particular problem in mind: "long-lived network-based applications that span multiple organizations." The reference application for REST is the world wide web. If that's not the kind of problem you have, then REST may not be the right fit.
HTTP is an application, whose application domain is the transfer of documents over a network. If you can frame your problem as document transfer over a network, then a whole bunch of the work has already been done for you, and you can leverage it if you are willing to conform to its constraints.
The remote authoring idioms in HTTP (primarily GET/PUT) are very CRUD-like: please give me your latest representation of some document; here is my latest representation of some document, please make your copy look like mine. Our API is a facade -- we pretend to be a dumb document store that understands GET and PUT semantics, but behind the scenes we do useful work.
So we might have, for example, a simple todo list. At the beginning, it is empty
GET /todoList
200 OK
[]
And if we wanted to send an email to Bob, we would first edit our local copy of the document.
["Send an email to bob#example.org"]
And then we would ask the server to make its copy of the document look like our copy
PUT /todoList
["Send an email to bob#example.org"]
HTTP semantics tell the server how to interpret this message, but it gets to choose for itself what to do with it. The server might, for example, update its own local copy of /todoList, send the email to Bob, update its representation of /recentlySentEmails, update its representation of /recentlySentEmailsToBob, and so on.
The response from the server takes a number of standard forms: 202 Accepted -- I understood your request, and I may act on it later; 204 No Content -- I edited my copy of the document to match yours, here's some metadata; 200 OK -- I've made changes to my representation of the document, here they are (or alternatively: I've made changes to my copy of the document, you can ask me for a refreshed copy).
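In client terms, handling those standard forms is mechanical. A sketch (TypeScript, against the hypothetical /todoList resource above):

// Sketch: PUT the edited document, then interpret the standard
// responses described above (200 / 202 / 204).
async function putTodoList(items: string[]): Promise<string[]> {
  const response = await fetch("/todoList", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(items),
  });
  switch (response.status) {
    case 200:
      return response.json();  // server sent back its representation
    case 202:                  // accepted -- the server may apply it later
    case 204:
      return items;            // server's copy now matches (or will match) ours
    default:
      throw new Error(`Unexpected status ${response.status}`);
  }
}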
"If we want to update the template and also instruct the server to update all documents based on the template, what would be a good design?"
The most straightforward example would be to just send the revised template, and allow the server to update other resources as it sees fit:
GET /template
200 ....
[original representation of template]
// make edits
PUT /template
[revised representation of template]
200 OK
If the server knows which documents need to be updated, it can just update them. Ta-Da.
If the client needs to know which resources have been updated, just send that list back
PUT /template
[revised representation of template]
200 OK
[URI of resources changed by the template]
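A sketch of that interaction from the client's side (assuming, purely for illustration, that the server answers with a JSON array of the changed resources' URIs):

// Sketch: replace the template, then read back which resources changed.
// The response format (a JSON array of URIs) is an assumption.
async function putTemplate(revised: object): Promise<string[]> {
  const response = await fetch("/template", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(revised),
  });
  const changed: string[] = await response.json();
  return changed; // e.g. ["/documents/1", "/documents/3", "/documents/7"]
}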
It can be a useful design exercise to work through how you might achieve the same result using a web site. How might it go? You would GET a resource that includes a form; the form might include a text area with some string representation of a template. You would replace the representation in the form with the one you wanted, and submit the form, carrying the template to the server. The server would make its changes, then give you back a new form, with check boxes for the different resources that will be affected by the change, allowing you to perhaps change the default selections. You would submit that form, and then the server could make the appropriate changes.
That, right there, is REST -- because you are using standardized media types, general-purpose components (like browsers) can do all of the HTTP and HTML bookkeeping. The browser knows how forms work; it knows how to take the form processing rules and metadata to create the appropriate requests. The web caches all know which representations can be stored, and which should be invalidated.
What are the options in a web API to indicate that the returned data is paged and further data is available?
ASP.Net Web API with OData uses a syntax similar to the following:
{
  "odata.metadata": "http://myapi.com/api/$metadata#MyResource",
  "value": [
    { "ID": 1, "Name": "foo" },
    ...
    { "ID": 100, "Name": "bar" }
  ],
  "odata.nextLink": "http://myapi.com/api/MyResource?$skip=20"
}
Are there any other ways to indicate the link to the next/previous 'page' of data without using a metadata wrapper around the results. Can this be achieved by using custom response headers instead?
Let's take a step back and think about WebAPI. WebAPI in essence is a raw data delivery mechanism. It's great for making an API and it elevates separation of concerns to a pretty good height (specifically eliminating UI concerns).
Using Web API, however, doesn't really change the core of the issue you are facing. You're asking "how do I query my data store in a performant manner and return the data to the client efficiently?" Your decisions here really parallel the same question when building a more traditional web app.
As you noted, OData is one method to return this information. The benefit here is that it's well known and well defined. The body of questions/blogs/articles on the topic is growing rapidly. The wrapper doesn't add any meaningful overhead.
Yet OData is by no means the only way you can do this. We've had to cope with this for as long as software has been displaying search results. It's tough to give you specific advice without really understanding your scenario. Here are some questions that bubbled up as I read your question:
- Are your result sets huge, but users only see the first one or two pages?
- Or do users tend to page through all of the results?
- Are pages of results limited (like 20 or 50 per page) or in the hundreds/thousands?
- Does the data set shift rapidly, so records are added as the user is paging?
- Are your result sets short, and is adding columns that repeat tolerable?
- Do you have enough control over the client to do something out of band -- like custom HTTP headers, or a separate HTTP request that just asks for a query summary?
There really are hundreds of options depending on your needs. I don't know what you're using as a data store, but I wrote a post on getting row count efficiently. The issues there are very germane here, albeit from the DB perspective. It might help you get some perspective.
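To the headers question specifically: yes, paging metadata can live in response headers instead of a body wrapper. One common convention (the GitHub API works this way) is the standard Link header from RFC 5988, optionally paired with a count header such as X-Total-Count. A client-side sketch:

// Sketch: read pagination from response headers, leaving the body a
// plain array. X-Total-Count is a convention, not a standard.
async function fetchPage(
  url: string,
): Promise<{ items: unknown[]; next?: string; total?: number }> {
  const response = await fetch(url);
  const items: unknown[] = await response.json();
  // e.g. Link: <http://myapi.com/api/MyResource?$skip=20>; rel="next"
  const link = response.headers.get("Link") ?? "";
  const next = link.match(/<([^>]+)>;\s*rel="next"/)?.[1];
  const totalHeader = response.headers.get("X-Total-Count");
  const total = totalHeader ? Number(totalHeader) : undefined;
  return { items, next, total };
}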
I'm working on a web service that I want to be RESTful. I know about the CRUD way of doing things, but there are a few things I'm not completely clear on. So this is the case:
I have a tracking service that collects some data in the browser (client) and then sends it off to the tracking server. There are two cases: one where the profile exists and one where it does not. Finally, the service returns some elements that have to be injected into the DOM.
So basically i need 2 web services:
http://mydomain.tld/profiles/
http://mydomain.tld/elements/
Question 1:
Right now I'm only using GET, but I'm rewriting the server to support CRUD. In that case I have to use POST if the profile does not exist: something like http://mydomain.tld/profiles/, where the POST payload has the information to save. If the profile exists, I use PUT on http://mydomain.tld/profiles/{id}/ and the PUT payload has the data to save. All good, but the problem is that, as far as I understand, xmlhttp does not support PUT. Is it OK to use POST even though it's an update?
Question 2:
As said, my service returns some elements to be injected into the DOM when a track is made. Logically, to keep it RESTful, I guess I would have to use POST/PUT to update the profile and then GET to get the elements to inject. But to save bandwidth and resources on the server side, it makes more sense to return the elements with the POST/PUT to profiles, even though it's a different resource. What is your take on this?
BR/Sune
EDIT:
Question 3:
In some cases I only want to update the profile and NOT receive elements back. Could I still use the same resource and then use a payload parameter to specify whether I want elements, e.g. "dont_receive_elements": true?
On question #1, are you sure that xmlhttp does not support PUT? I just ran http://www.mnot.net/javascript/xmlhttprequest/ on three browsers (Chrome, Firefox, IE) and according to the output, PUT was successful on all browsers. Following the information on http://www.slideshare.net/apigee/rest-design-webinar (and I highly recommend checking out the many Apigee videos and slideshows on RESTful API design), PUT is recommended for the use case you mention.
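For what it's worth, a quick way to verify this yourself (the URL and payload here are placeholders):

// Sketch: PUT via XMLHttpRequest -- supported by all major browsers.
const xhr = new XMLHttpRequest();
xhr.open("PUT", "http://mydomain.tld/profiles/123/");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onload = () => console.log("status:", xhr.status);
xhr.send(JSON.stringify({ some: "profile data" }));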
But you may be able to avoid this issue entirely by thinking a little differently about your data. Is it possible to consider that you have a profile and that for each profile you have 0 or more sets of payload information? In this model the two cases are:
1. No profile exists: create the profile with a POST to .../profiles/. Then add elements/tracking data with POSTs to .../profile/123/tracks/ (or .../profile/123/elements/).
2. Profile exists: just add the elements/tracking data.
(Sorry, without understanding your model in detail, it is hard to be very precise.)
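A sketch of those two cases from the browser (using fetch; the URL shapes and the returned id field follow the hypothetical model above):

// Sketch: create the profile if needed (case 1), then always POST the
// tracking data (case 2). Assumes the server returns the new profile id.
async function track(profileId: string | null, payload: object): Promise<string> {
  if (profileId === null) {
    const created = await fetch("/profiles/", { method: "POST" });
    const body: { id: string } = await created.json();
    profileId = body.id;
  }
  await fetch(`/profile/${profileId}/tracks/`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return profileId;
}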
As for question #2: going with a data model where a profile has 0 or more elements, you could update the profile (adding the necessary elements) and then return the updated profile (and its full graph of elements), saving you any additional GETs.
More generally on question #2, as the developer of the API you have a fair amount of freedom in the REST world - if you are focused on making it easy and obvious for the consumers of your API then you are probably fine.
Bottom line: Check out www.apigee.com - they know much more than I.
@Richard - thanks a lot for your links and feedback. The solution I came down to is to make the API simple and clean, as you suggest in your comment, having separate calls to each resource.
Then, to save bandwidth and keep performance up, I made a "non-official" function in the API that works like a proxy internally and is called with a single GET that updates a profile and returns an element. This, I know, is not very RESTful, etc., but it handles my situation and is not part of the official API. The reason I need it to support GET is that I need to call it from JavaScript and cross-domain.
I guess I could have solved the cross-domain issue by using JSONP, but I would still have to make the API "unclean" :)
We all know that Meteor offers the miniMongo driver, which seamlessly allows the client to access the persistence layer (MongoDB).
If any client can access the persistence API, how does one secure one's application?
What are the security mechanisms that Meteor provides, and in what context should they be used?
When you create an app using the meteor command, by default the app includes the following packages:
autopublish
insecure
Together, these mimic the effect of each client having full read/write access to the server's database. They are useful prototyping tools (for development purposes only), but typically not appropriate for production applications. When you're ready for a production release, just remove these packages (e.g. meteor remove autopublish insecure).
What's more, Meteor supports Facebook / Twitter / and many more packages to handle authentication, and the coolest is the accounts-ui package.
The collections doc says:
"Currently the client is given full write access to the collection. They can execute arbitrary Mongo update commands. Once we build authentication, you will be able to limit the client's direct access to insert, update, and remove. We are also considering validators and other ORM-like functionality."
If you are talking about restricting the client from using any unauthorized insert/update/delete API, that's possible.
See their todo app at https://github.com/meteor/meteor/tree/171816005fa2e263ba54d08d596e5b94dea47b0d/examples/todos
Also, they have now added a built-in auth module that lets you log in and register. So it's safe, as long as you are taking care of XSS, validation, client headers, etc.
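For illustration, this is roughly what that restriction looks like with collection allow rules (using the current Meteor API; the Profiles collection and ownerId field are hypothetical):

// Sketch: selective write access after removing the "insecure" package.
import { Mongo } from "meteor/mongo";

interface Profile {
  _id?: string;
  ownerId: string;
}

const Profiles = new Mongo.Collection<Profile>("profiles");

Profiles.allow({
  insert: (userId) => !!userId,                    // logged-in users only
  update: (userId, doc) => doc.ownerId === userId, // only the owner
  remove: (userId, doc) => doc.ownerId === userId,
});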
You can always convert a Meteor app into a fully working Node.js application by deploying it to Node. So if you know how to secure a Node.js application, you should be able to secure Meteor.
As of 0.6.4, during development mode, is_client and is_server blocks both still go to the client system. I can't say whether these are segregated when you turn off development mode.
However, if they are not, a hacker might be able to gain insight into the system by reviewing the blocks of if (Meteor.is_server) code. That particularly concerns me, especially because I noted that, at this point, I still can't segregate Collections into separate files on the client and server.
Update
Well, the point is: don't put security-related code in an is_server block in a non-server directory (i.e., make sure it is in something under /server).
I wanted to see whether I was just nuts about not being able to segregate client and server Collections into the client and server directories. In fact, there is no problem with this.
Here is my test. It's a simple example of the publish/subscribe model that seems to work fine.
http://goo.gl/E1c56
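For reference, a minimal publish/subscribe pair along those lines looks roughly like this (using the modern isServer/isClient spelling; the Items collection and ownerId field are illustrative):

// Sketch: explicit publish/subscribe instead of autopublish.
import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

const Items = new Mongo.Collection<{ ownerId: string | null }>("items");

if (Meteor.isServer) {
  // The server decides exactly which documents each client may see.
  Meteor.publish("myItems", function () {
    return Items.find({ ownerId: this.userId });
  });
}

if (Meteor.isClient) {
  Meteor.subscribe("myItems"); // the client gets only the published subset
}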