What are the options in a web API to indicate that the returned data is paged and further data is available?
ASP.NET Web API with OData uses a syntax similar to the following:
{
  "odata.metadata": "http://myapi.com/api/$metadata#MyResource",
  "value": [
    { "ID": 1, "Name": "foo" },
    ...
    { "ID": 100, "Name": "bar" }
  ],
  "odata.nextLink": "http://myapi.com/api/MyResource?$skip=20"
}
Are there any other ways to indicate the link to the next/previous 'page' of data without using a metadata wrapper around the results? Can this be achieved by using custom response headers instead?
Let's take a step back and think about Web API. Web API is, in essence, a raw data delivery mechanism. It's great for building an API, and it elevates separation of concerns to a pretty good height (specifically, eliminating UI concerns).
Using Web API, however, doesn't really change the core of the issue you are facing. You're asking, "How do I query my data store in a performant manner and return the data to the client efficiently?" Your decisions here really parallel the same question when building a more traditional web app.
As you noted, OData is one method to return this information. The benefit here is that it's well known and well defined. The body of questions/blogs/articles on the topic is growing rapidly. The wrapper doesn't add any meaningful overhead.
Yet OData is by no means the only way you can do this. We've had to cope with this ever since software started displaying search results. It's tough to give you specific advice without really understanding your scenario. Here are some questions that bubbled up as I read your question:
Are your result sets huge, but users only see the first one or two pages?
Or do users tend to page through all of the results?
Are pages of results limited (like 20 or 50 per page), or in the hundreds or thousands?
Does the data set shift rapidly, so records are added as the user is paging?
Are your result sets short, and is adding columns that repeat tolerable?
Do you have enough control over the client to do something out of band -- like custom HTTP headers, or a separate HTTP request that just asks for a query summary?
There really are hundreds of options depending on your needs. I don't know what you're using as a data store, but I wrote a post on getting row count efficiently. The issues there are very germane here, albeit from the DB perspective. It might help you get some perspective.
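To answer your header question directly: yes, you can skip the wrapper and put the paging metadata in response headers instead; GitHub's API, for example, uses the standard Link header (RFC 5988) for exactly this. Here is a rough sketch of what that might look like in ASP.NET Web API -- the resource type, repository, and route name are placeholders:

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class MyResource
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public interface IMyResourceRepository
{
    IEnumerable<MyResource> GetPage(int page, int pageSize);
    int Count();
}

public class MyResourceController : ApiController
{
    // Hypothetical data access; swap in whatever you actually use.
    private readonly IMyResourceRepository _repository;

    public MyResourceController(IMyResourceRepository repository)
    {
        _repository = repository;
    }

    public HttpResponseMessage Get(int page = 1, int pageSize = 20)
    {
        var items = _repository.GetPage(page, pageSize);
        var totalCount = _repository.Count();

        var response = Request.CreateResponse(HttpStatusCode.OK, items);

        // The paging metadata travels in headers instead of a body wrapper.
        response.Headers.Add("X-Total-Count", totalCount.ToString());

        // Or use the standard Link header, as GitHub's API does.
        if (page * pageSize < totalCount)
        {
            var next = Url.Link("DefaultApi",
                new { controller = "MyResource", page = page + 1, pageSize });
            response.Headers.Add("Link", "<" + next + ">; rel=\"next\"");
        }

        return response;
    }
}

The body stays a plain JSON array, and the client decides whether to follow the Link header. The trade-off versus the OData wrapper is that headers are invisible to clients that only look at the payload.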
Currently I have a .NET Core Web API that is serving up videos. The videos are stored in a SQL Server table as varbinary(MAX). This was working; however, I read that to support iOS Safari we needed to accept the Range header, so I have added support for this (I think).
However, now I am noticing two things (which could be unrelated):
1) Whenever a call is made to this API, the CPU spikes to 100%. I can only assume that is Entity Framework querying the DB for a 25 MB file. It seems crazy, as the API is doing nothing else. Can this be improved? The server just grinds.
2) Multiple requests are made to the API with different byte ranges requested, but my API in turn queries the DB on each request, sending the CPU into overdrive for a long period.
Is there a better way of handling range requests when querying for a large object?
If you ask me, EF is not really well suited for this; it's too clunky and resource-hungry. You could write your own T-SQL using something like SUBSTRING. That said, from a practical point of view, depending on how many and how big these files are and how many users you have, I would not go with such a solution.
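For completeness, a sketch of that SUBSTRING approach with plain ADO.NET is below; the table and column names are placeholders for your schema:

using System.Data;
using System.Data.SqlClient;

public static class VideoStore
{
    // dbo.Videos, Data, and Id are placeholders for your actual schema.
    public static byte[] ReadRange(string connectionString, int videoId, long offset, int length)
    {
        const string sql =
            "SELECT SUBSTRING(Data, @Offset + 1, @Length) " +  // SUBSTRING is 1-based
            "FROM dbo.Videos WHERE Id = @Id;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.Add("@Id", SqlDbType.Int).Value = videoId;
            command.Parameters.Add("@Offset", SqlDbType.BigInt).Value = offset;
            command.Parameters.Add("@Length", SqlDbType.Int).Value = length;

            connection.Open();

            // Only the requested slice crosses the wire; the 25 MB blob
            // is never materialized in your web process.
            return (byte[])command.ExecuteScalar();
        }
    }
}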
I don't think a SQL database is how you should be storing this data at all.
You could start doing some research on how netflix does it: https://www.techhive.com/article/2158040/how-netflix-streams-movies-to-your-tv.html
You probably want something like that: a CDN, some sort of caching. Your way of doing it now might work while you build it, with one or two users, but if this is an API used by lots of people, you will quickly find that it won't scale.
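That said, if the files do end up on disk (or in a disk cache) rather than in SQL, ASP.NET Core can handle the Range header for you. A quick sketch, with the cache location invented:

using System.IO;
using Microsoft.AspNetCore.Mvc;

public class VideosController : ControllerBase
{
    private const string CacheRoot = "/var/cache/videos";  // hypothetical location

    [HttpGet("videos/{id}")]
    public IActionResult GetVideo(int id)
    {
        var path = Path.Combine(CacheRoot, id + ".mp4");
        var stream = System.IO.File.OpenRead(path);

        // enableRangeProcessing makes MVC honor the Range header itself,
        // so iOS Safari gets its 206 partial responses without custom code.
        return File(stream, "video/mp4", enableRangeProcessing: true);
    }
}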
I have a REST service which receives a customer's ID and an input; according to these, the service responds with something different. It's basically a menu, but the response can differ depending on the customer's input. I need to stress this service to see how many requests the server can handle and to determine the max TPS, but since it's a flow I don't know how to simulate this. Any ideas or pages that could be useful?
Thanks in advance for your help
What you mostly need is an understanding of how to handle dynamic parameters. Imagine you are simulating a user who goes to a blog, views a random article, and then posts a comment about it.
It's all about designing a user with dynamic behavior, which changes depending on variables or server output. JMeter supports this kind of simulation very well by providing dozens of useful components, like:
CSV Datasets,
Regexp Variable extractors,
and more.
We have written an article which explains how to simulate users with dynamic behavior. It's very similar to what you would do in JMeter, since OctoPerf is based on JMeter, with a web UI built on top.
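If you just want a quick scripted baseline before building a full JMeter plan, the core idea is small: concurrent workers, a different (dynamic) set of parameters on each request, and a throughput measurement at the end. A minimal C# sketch, with the endpoint and parameters invented:

using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class LoadProbe
{
    static async Task Main()
    {
        var http = new HttpClient();
        const int workers = 50;
        const int requestsPerWorker = 100;

        var stopwatch = Stopwatch.StartNew();

        var tasks = Enumerable.Range(0, workers).Select(async worker =>
        {
            var random = new Random(worker);  // per-worker RNG; Random isn't thread-safe
            for (var i = 0; i < requestsPerWorker; i++)
            {
                // Dynamic parameters: a different customer and menu input each time.
                var customerId = random.Next(1, 1000);
                var input = random.Next(1, 9);
                await http.GetAsync(
                    "http://myservice.tld/menu?customerId=" + customerId + "&input=" + input);
            }
        }).ToArray();

        await Task.WhenAll(tasks);
        stopwatch.Stop();

        var total = workers * requestsPerWorker;
        Console.WriteLine($"{total / stopwatch.Elapsed.TotalSeconds:F1} requests/sec");
    }
}

Ramp the worker count up until the rate stops climbing; that plateau is roughly your max TPS. JMeter gives you the same thing plus proper ramp-up, assertions, and reporting, which is why I'd still use it for the real test.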
Does anyone have experience with programmatic exports of data from BaaS providers like parse.com or StackMob?
I am aware that both providers (as far as I can tell from the marketing talk) offer a REST API which will allow for queries against the database, not only to be used by mobile clients but also by e.g. custom web apps.
I am also aware that both providers offer a manual export of data (parse.com via their web interface, StackMob via support).
But let's say I would like to dump all data nightly so that I can import it into a reporting system, for instance. Or maybe simply to have an up-to-date backup.
In this case, I would need a programmatic way to export/replicate the data stored in the backend. Manual exports are not an option for obvious reasons.
The REST APIs offered, however, seem to be designed for specific queries, not for mass reads (performance?). Let alone the pricing -- I assume none of the providers would be happy about a nightly X-gigabyte data export via their REST API, so there will probably be a price tag.
I just couldn't find any specific information on this topic so far, so I was wondering if anyone else has already gone through this. Also, any suggestions on StackMob/parse alternatives are welcome, especially if related to the data export topic.
Cheers, Alex
Did you see the section of the Parse REST API on Batch operations? Batch operations reduce the number of API calls needed to grab data so that you are not using a call for every row you retrieve. Keep in mind that there is still a limit (the default is 100, but you can set it to a maximum of 1000). That means you are still limited to pulling down 1000 rows per API call.
I can't comment on StackMob because I haven't used it. At my present job, we are using Parse and we wrote a C# app which compares the data in a Parse class with a SQL table and pulls down any changes.
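Stripped down, that kind of export loop boils down to paging with limit and skip against the REST API. A rough C# sketch -- the class name and credentials are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ParseExporter
{
    static async Task Main()
    {
        var http = new HttpClient();
        http.DefaultRequestHeaders.Add("X-Parse-Application-Id", "YOUR_APP_ID");
        http.DefaultRequestHeaders.Add("X-Parse-REST-API-Key", "YOUR_REST_KEY");

        const int limit = 1000;  // Parse's per-request maximum
        var skip = 0;

        while (true)
        {
            var json = await http.GetStringAsync(
                "https://api.parse.com/1/classes/MyClass?limit=" + limit + "&skip=" + skip);

            // Parse the "results" array and append it to your dump here.

            if (json.Contains("\"results\":[]"))  // crude end-of-data check
                break;

            skip += limit;
        }
    }
}

If I remember right, Parse also caps how far skip can go, so for very large classes you may need to page on a field like createdAt instead of relying on skip alone.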
I'm working on a web service that I want to be RESTful. I know about the CRUD way of doing things, but there are a few things I'm not completely clear on. So this is the case:
I have a tracking service that collects some data in the browser (client) and then sends it off to the tracking server. There are two cases: one where the profile exists and one where it does not. Finally, the service returns some elements that have to be injected into the DOM.
So basically I need two web services:
http://mydomain.tld/profiles/
http://mydomain.tld/elements/
Question 1:
Right now I'm only using GET, but I'm rewriting the server to support CRUD. So in that case I have to use POST if the profile does not exist: something like http://mydomain.tld/profiles/, where the POST payload has the information to save. If the profile exists, I use PUT on http://mydomain.tld/profiles/{id}/ and the PUT payload has the data to save. All good, but the problem is that, as far as I understand, XMLHttpRequest does not support PUT. Is it OK to use POST even though it's an update?
Question 2:
As said, my service returns some elements to be injected into the DOM when a track is made. Logically, to keep it RESTful, I guess I would have to use POST/PUT to update the profile and then GET to fetch the elements to inject. But to save bandwidth and resources on the server side, it makes more sense to return the elements with the POST/PUT to profiles, even though it's a different resource. What is your take on this?
BR/Sune
EDIT:
Question 3:
In some cases I only want to update the profile and NOT receive elements back. Could I still use the same resource and then use a payload parameter to specify whether I want elements, e.g. "dont_receive_elements: true"?
On question #1, are you sure that XMLHttpRequest does not support "put"? I just ran http://www.mnot.net/javascript/xmlhttprequest/ on three browsers (Chrome, Firefox, IE) and, according to the output, "put" was successful on all of them. Following the information on http://www.slideshare.net/apigee/rest-design-webinar (and I highly recommend checking out the many Apigee videos and slideshows on RESTful APIs), "put" is recommended for the use case you mention.
But you may be able to avoid this issue entirely by thinking a little differently about your data. Is it possible to consider that you have a profile and that for each profile you have 0 or more sets of payload information? In this model the two cases are:
1. No profile exists: create a profile with a POST to .../profiles/. Then add elements/tracking data with POSTs to .../profile/123/tracks/ (or .../profile/123/elements/).
2. Profile exists: just add the elements/tracking data.
(Sorry, without understanding your model in detail it is hard to be very precise.)
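To make that layout a bit more concrete, the server side might look roughly like this (using ASP.NET Web API purely as an example; the types, routes, and repository are all invented):

using System.Web.Http;

public class Profile { public int Id { get; set; } }
public class Track { public string Data { get; set; } }

public interface ITrackingRepository
{
    Profile CreateProfile(Profile profile);
    object AddTrack(int id, Track track);
}

public class ProfilesController : ApiController
{
    // Hypothetical storage abstraction.
    private readonly ITrackingRepository _repository;

    public ProfilesController(ITrackingRepository repository)
    {
        _repository = repository;
    }

    // POST /profiles/ -> case 1: no profile exists yet, create one
    public IHttpActionResult Post(Profile profile)
    {
        var created = _repository.CreateProfile(profile);
        return Created("/profiles/" + created.Id, created);
    }

    // POST /profiles/{id}/tracks/ -> case 2: profile exists, append tracking data
    [Route("profiles/{id}/tracks")]
    [HttpPost]
    public IHttpActionResult PostTrack(int id, Track track)
    {
        var elements = _repository.AddTrack(id, track);
        return Ok(elements);  // returning the elements here saves a second GET
    }
}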
As for question #2: going with a data model where a profile has 0 or more elements, you could update the profile (adding the necessary elements) and then return the updated profile (and its full graph of elements), saving you any additional GETs.
More generally on question #2, as the developer of the API you have a fair amount of freedom in the REST world - if you are focused on making it easy and obvious for the consumers of your API then you are probably fine.
Bottom line: Check out www.apigee.com - they know much more than I.
@Richard -- thanks a lot for your links and feedback. The solution I came down to is to make the API simple and clean, as you suggest in your comment, having separate calls to each resource.
Then, to save bandwidth and keep performance up, I made a "non-official" function in the API that works like a proxy internally and is called with a single GET that updates a profile and returns an element. This, I know, is not very RESTful, but it handles my situation and is not part of the official API. The reason I need it to support GET is that I need to call it from JavaScript and cross-domain.
I guess I could have solved the cross-domain issue by using JSONP, but I would still have to make the API "unclean" :)
Certain forms are too complicated to fit on one page. If, for example, a form involves large amounts of structured data, such as picking locations on a map, scheduling events in a calendar widget, or having certain parts of a form change depending on earlier input, it is valuable to be able to break a form up over multiple pages.
This is easy to do with dynamic web pages and JavaScript: one would simply create a tab widget with different pages, and the actual submitted form would contain the whole tab widget and all of its input fields, yielding a single POST request for the entire operation.
Sometimes, however, it takes a long time to generate certain input fields, and they might remain computationally intensive even after the page has been generated, taxing the browser of a user on a low-end computer. Additionally, it becomes difficult or impossible to create forms that adapt themselves based on earlier input.
It therefore becomes necessary to split up a certain form over multiple full page requests.
This can prove difficult, especially since the first page of a form will POST to /location/a, which will issue a redirect to /location/b, which is then requested via GET by the client. Passing the stored form data from POST /location/a to GET /location/b is where the difficulty lies.
Erwin Vervaet, the creator of Spring Web Flow (A subproject of the Spring framework, mostly known for its dependency injection capabilities) once wrote a blog article demonstrating this functionality in said framework, and also comparing it to the Lift Web Framework which implemented similar functionality. He then presents a challenge to other web frameworks, which is further described in a later article.
How would Yesod face this problem, especially considering its stateless REST-based nature?
Firstly, there's no pre-built solution to this in existence yet (that I'm aware of at least). And I'm not familiar with how the other frameworks mentioned solve the problem. So what I say here is pretty much conjecture. I'm fairly certain it would work, however.
The crux of the issue here is encoding page A's POST parameters into the GET request for page B. The simplest way to do that would be to stick page A's POST parameters into a session variable. However, doing so would break navigation pretty thoroughly: back/forward wouldn't work at all as described.
So we come back to REST: we need to encode the POST parameters into the request itself. That really means putting the information in either the requested path, or the query string. And the query string probably makes the most sense.
I'd be concerned about putting the raw POST parameters into the query string, as that would allow any proxy server to easily snoop the contents. So I'd like to leverage the existing cryptography from clientsession. In other words, we'll stick a signed, encrypted version of the previous form submission in a query string parameter.
To make it a bit more concrete:
User goes to page A via GET.
User submits page A via POST.
Server validates form submission, gets a value, serializes it, encrypts/hashes it.
User is redirected to page B as a GET, with a query string parameter containing the encrypted/hashed value from page A.
Continue this process as many times as desired.
On the final page, you can decrypt the query string parameter and have all of the form submissions.
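I can't show what the Yesod add-on would look like yet, but to make steps 3 through 5 concrete, here is the shape of the encrypt/sign round trip as a plain C# illustration; in Yesod you would lean on clientsession for the same thing. Everything here (key handling, encoding choices) is illustrative:

using System;
using System.Security.Cryptography;
using System.Text;

static class FormState
{
    // Illustration only (not Yesod): AES-GCM gives both encryption and
    // tamper-detection, similar in spirit to what clientsession provides.
    // The key would come from server-side configuration.
    public static string Protect(byte[] key, string serializedForm)
    {
        var nonce = RandomNumberGenerator.GetBytes(12);
        var plaintext = Encoding.UTF8.GetBytes(serializedForm);
        var ciphertext = new byte[plaintext.Length];
        var tag = new byte[16];

        using (var aes = new AesGcm(key, tagSizeInBytes: 16))
            aes.Encrypt(nonce, plaintext, ciphertext, tag);

        // nonce || tag || ciphertext, base64url-encoded for the query string.
        var blob = new byte[nonce.Length + tag.Length + ciphertext.Length];
        nonce.CopyTo(blob, 0);
        tag.CopyTo(blob, nonce.Length);
        ciphertext.CopyTo(blob, nonce.Length + tag.Length);
        return Convert.ToBase64String(blob).Replace('+', '-').Replace('/', '_').TrimEnd('=');
    }

    public static string Unprotect(byte[] key, string token)
    {
        var padded = token.Replace('-', '+').Replace('_', '/');
        padded = padded.PadRight(padded.Length + (4 - padded.Length % 4) % 4, '=');
        var blob = Convert.FromBase64String(padded);

        var nonce = blob[..12];
        var tag = blob[12..28];
        var ciphertext = blob[28..];
        var plaintext = new byte[ciphertext.Length];

        using (var aes = new AesGcm(key, tagSizeInBytes: 16))
            aes.Decrypt(nonce, ciphertext, plaintext, tag);  // throws if tampered with

        return Encoding.UTF8.GetString(plaintext);
    }
}

Each subsequent page carries the token forward in its own query string, and the final handler calls Unprotect to recover the accumulated submissions; a tampered token fails the authentication tag check and can be treated as a fresh start.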
This looks like it would be a fun add-on package to write if anyone's interested.