Can Firebase server timestamps be written without making two requests?

The Firebase REST API describes how to write server values (currently only timestamps are supported) at a location, but it appears that one must submit a separate request in order to do this. Is there (or has there been planned) any way of setting timestamps (like createdAt) at the same time one submits other data? Seems like this would really help reduce traffic and improve performance.

Sure, this is possible. The documentation is admittedly a little unclear, but all you need to do is include the {".sv": "timestamp"} object as part of your JSON payload. Here's an example that saves it to a key timestamp.
curl -X PUT -d '{"something":"something", "timestamp":{".sv": "timestamp"}}' https://abc.firebaseio-demo.com/.json
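The same server value works with POST or PATCH as well, so a field like createdAt can be filled in by the server in the same request that creates the record (the path and field names below are just illustrative):
curl -X POST -d '{"title": "my item", "createdAt": {".sv": "timestamp"}}' https://abc.firebaseio-demo.com/items.json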

Related

REST API containing POST and PUT/PATCH calling a compute server generating results files

The server application I'm implementing generates calculation results and stores them in result files in directories on the server, for example customer/project/scenario/resultfiles. I want to design and implement a resilient REST API to retrieve the result files for display in the client browser, to delete result files, customers, etc., and to create result files within a scenario from calculation parameters sent to the server. And possibly to do sensitivity analysis, generating result files within a scenario by varying calculation parameters.
I can use GET to retrieve these files using a URL with a query string such as appname/?customerId=xxx&projectId=xxx etc., and DELETE on the directory structure and files, also using query strings. What I'm unclear about is the best REST approach for calling functions that implement various calculations on the server.
Perhaps this should be a POST for the initial calculation in a scenario as this is creating the results files? Maybe a PUT or a PATCH for the sensitivity analysis or other partial recalculations as this is modifying results in an existing scenario?
There's a fair bit of online discussion about PUT vs PATCH vs POST used for database related activities. I could work up a REST approach based on what I've read for REST database interactions but if there's already standard practice on how to do calculations through a REST API I'd rather use that.
You can always just use POST. If we were using HTML representations of resources to guide the client through the protocol, we'd be doing that by following links and submitting forms. In HTML, submitting forms is limited to GET and POST.
PUT and PATCH have more tightly constrained semantics than POST. Specifically, they are methods that request that the server make its representation match the client's representation (for PUT, we send the entire replacement representation; for PATCH, we just send the changes made by the client).
Technically, there's nothing wrong with the server not accepting the offered edits as is:
A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being sent in a 200 (OK) response. However, there is no guarantee that such a state change will be observable, since the target resource might be acted upon by other user agents in parallel, or might be subject to dynamic processing by the origin server, before any subsequent GET is received. A successful response only implies that the user agent's intent was achieved at the time of its processing by the origin server.
So the server could accept the client's edits, and then immediately apply additional edits of its own.
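As a rough sketch under the directory layout described in the question (the URLs and parameter names below are hypothetical), the initial calculation could be a POST that creates the result files, and a sensitivity run could be another POST with varied parameters:
# Create result files for a scenario by POSTing the calculation parameters
curl -X POST -H "Content-Type: application/json" -d '{"loadFactor": 0.85, "iterations": 500}' https://example.com/customers/42/projects/7/scenarios/3/results
# Sensitivity analysis: vary a parameter and POST again; the server decides whether this creates new files or revises existing ones
curl -X POST -H "Content-Type: application/json" -d '{"loadFactor": 0.90, "iterations": 500}' https://example.com/customers/42/projects/7/scenarios/3/results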

GET request with a very long query string (CSV)

I'm looking to implement an API call where you can specify any combination of up to ~6000 IDs to get data from the server. Trouble is, it's quite likely that a request will contain a large number of IDs - say around 4000. The query string would therefore be very long and possibly too long for the browser?
I wonder, what would be the best approach? I could use a POST but it doesn't really fit with REST - but then again I'm not too fussed about that. Is there a better way of doing this?
In this case, POST really is the solution. From both a REST perspective and an optimization perspective, if you expect this call to be invoked multiple times with the same list of IDs, you may want to consider one POST call to create a server-side named/defined list, and then have subsequent GET requests reference the created list so that this data doesn't have to be repeated each and every time.
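A minimal sketch of that two-step pattern, with hypothetical endpoints and payloads:
# Create a named, server-side list of IDs once
curl -X POST -H "Content-Type: application/json" -d '{"ids": [101, 102, 103, 104]}' https://api.example.com/id-lists
# Suppose the server answers 201 Created with Location: /id-lists/abc123
# Subsequent reads reference the stored list instead of repeating thousands of IDs
curl "https://api.example.com/data?idList=abc123"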

Best practice for file-based search in rest service

I'm helping build a similarity search service for files. One way to search for something is with a GET request, by giving a file's URL, but I also need to allow clients to send the file directly. I have the following options:
Make the client send a GET request with a Payload; it seems this is not recommended -- HTTP GET with request body
Use something else than GET (maybe a PUT?) for file-based search. The problem is none of the other HTTP methods seems to suit this purpose.
What option would suit best here? I'm not an expert in this field, and I can't figure out what's the right thing to do in this situation.
Here is the rule I have always followed with REST.
GET - only query data and return a data set.
POST - create data in the database.
PUT - modify existing data.
DELETE - destroy data in the database.
If you are sending a payload for search params, you can do a GET and put those params (assuming they are name/value pairs) in the query string of the URI.
e.g. http://my.simsearch.com/?param1=first&param2=second ...
If you are actually going to change the database then a POST or PUT is in order.
I hope this helps.
I ended up sending the payload with a GET request. Even though it's not really recommended, hopefully no libraries will complain about this.
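For reference, the two variants discussed above might look like this (the host, paths, and parameter names are hypothetical):
# Search by a file's URL with a plain GET
curl "https://my.simsearch.com/search?url=https%3A%2F%2Fexample.com%2Fphoto.jpg"
# Send the file itself; one common pattern is to POST it to a search resource instead of using a GET body
curl -X POST -H "Content-Type: application/octet-stream" --data-binary @photo.jpg https://my.simsearch.com/search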

REST Webservices - GET but for multiple objects

I have already gone through this
How best to design a REST API with multiple filters?
This does help when you have, say, 3 or 4 filtering criteria and you can accommodate that in the query string.
However let's take this example
You want to get call details about 20 telephone numbers, between a certain startdate and enddate.
Now I do agree that ideally one should make individual queries for each number and then collate all the data on the client side.
However, for certain live systems that would mean 20 rounds of queries on the switches or CDR databases. That is 20 request-response cycles, plus the client having to collate and order the results again based on time. At the database level it would have been a single query that returns ordered data, which could then be transformed into a REST XML response that the client can embed in their system.
If we are to use GET, the query string will get really confusing and is subject to a length limit as well.
Any suggestions to get around this issue?
Of course we can send a POST request with an XML body containing all the numbers, but that goes against REST GET principles.
For GET you can use OData queries. For example, when your start and end dates are represented as numbers (Unix time), the URI could look like:
GET http://operatorcalls.com/Calls/Details?$filter=Date le 1342699200 and Date gt 1342526400
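If the service also exposes the phone number as a filterable property (an assumption about this particular data model), the numbers could be folded into the same $filter with or clauses, e.g.:
GET http://operatorcalls.com/Calls/Details?$filter=(Number eq '5551001' or Number eq '5551002') and Date le 1342699200 and Date gt 1342526400
For the full set of 20 numbers this still runs into the query-string length concern raised in the question, so it mainly helps when the list is short.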
What you seem to be missing is an important concept of REST: caching. This can be done, as an example, in the browser, for a single client. Or it can be done as a shared cache between all the clients and the live production system (whatever it may be), thus reducing queries against the live production system, or in your example, the actual switches.
You should really take some time to read Fielding's thesis and understand that REST is an architectural style.
I found a solution here: Handling multiple parameters in a URI (RESTfully) in Java,
but I'm not quite happy with it.
So in effect we will end up using /cdr?numbers=number1,number2,number3 ...
However, I'm not too pleased with it, as there is a limit to the query string length in the URL and it doesn't really seem like an elegant solution. Has anyone found a solution to this in their own implementation?
Basically, not using POST for this kind of fetch request and also not using cumbersome and lengthy query strings.
We are using Jersey but are also open to using CXF or Spring REST.

RESTful way to create multiple items in one request

I am working on a small client server program to collect orders. I want to do this in a "REST(ful) way".
What I want to do is:
Collect all orderlines (product and quantity) and send the complete order to the server
At the moment I see two options to do this:
Send each orderline to the server: POST qty and product_id
I actually don't want to do this because I want to limit the number of requests to the server, so option 2:
Collect all the orderlines and send them to the server at once.
How should I implement option 2? a couple of ideas I have is:
Wrap all orderlines in a JSON object and send this to the server or use an array to post the orderlines.
Is it a good idea or good practice to implement option 2, and if so, how should I do it?
What is good practice?
I believe that another correct way to approach this would be to create another resource that represents your collection of resources.
For example, imagine that we have an endpoint like /api/sheep/{id} and we can POST to /api/sheep to create a sheep resource.
Now, if we want to support bulk creation, we should consider a new flock resource at /api/flock (or /api/<your-resource>-collection if you lack a better meaningful name). Remember that resources don't need to map to your database or app models. This is a common misconception.
Resources are a higher-level representation, not tied directly to your data. Operating on a resource can have significant side effects, like firing an alert to a user, updating other related data, initiating a long-lived process, etc. For example, we could map a file system or even the unix ps command as a REST API.
I think it is safe to assume that operating on a resource may also mean creating several other entities as a side effect.
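A sketch of what the two endpoints could accept (the flock payload shape and host are just illustrative):
# Create a single sheep
curl -X POST -H "Content-Type: application/json" -d '{"name": "Dolly"}' https://api.example.com/api/sheep
# Create many sheep at once by POSTing a flock resource
curl -X POST -H "Content-Type: application/json" -d '{"sheep": [{"name": "Dolly"}, {"name": "Shaun"}, {"name": "Timmy"}]}' https://api.example.com/api/flock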
Although bulk operations (e.g. batch create) are essential in many systems, they are not formally addressed by the RESTful architecture style.
I found that POSTing a collection as you suggested basically works, but problems arise when you need to report failures in response to such a request. Such problems are worse when multiple failures occur for different causes or when the server doesn't support transactions.
My suggestion to you is that if there is no performance problem, for example when the service provider is on the LAN (not WAN) or the data is relatively small, it's worth it to send 100 POST requests to the server. Keep it simple, start with separate requests and if you have a performance problem try to optimize.
Facebook explains how to do this: https://developers.facebook.com/docs/graph-api/making-multiple-requests
Simple batched requests
The batch API takes in an array of logical HTTP requests represented as JSON arrays - each request has a method (corresponding to HTTP method GET/PUT/POST/DELETE etc.), a relative_url (the portion of the URL after graph.facebook.com), an optional headers array (corresponding to HTTP headers) and an optional body (for POST and PUT requests). The Batch API returns an array of logical HTTP responses represented as JSON arrays - each response has a status code, an optional headers array and an optional body (which is a JSON encoded string).
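For illustration, a batched call following that description could look like this (the access token value and the relative URLs are placeholders):
curl -X POST -F 'access_token=<token>' -F 'batch=[{"method":"GET","relative_url":"me"},{"method":"POST","relative_url":"me/feed","body":"message=Hello"}]' https://graph.facebook.com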
Your idea seems valid to me. The implementation is a matter of your preference. You can use JSON or just parameters for this ("order_lines[]" array) and do
POST /orders
Since you are going to create multiple resources at once in a single action (the order and its lines), it's vital to validate each and every one of them and save them only if all of them pass validation, i.e. you should do it in a transaction.
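For example, the request body could carry the order header together with its lines (the field names and host here are just one possible shape):
curl -X POST -H "Content-Type: application/json" -d '{"customer_id": 42, "order_lines": [{"product_id": 1, "qty": 3}, {"product_id": 7, "qty": 1}]}' https://api.example.com/orders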
I've actually been wrestling with this lately, and here's what I'm working towards.
If a POST that adds multiple resources succeeds, return a 200 OK (I was considering a 201, but the user ultimately doesn't land on a resource that was created) along with a page that displays all resources that were added, either in read-only or editable fashion. For instance, a user is able to select and POST multiple images to a gallery using a form comprising only a single file input. If the POST request succeeds in its entirety the user is presented with a set of forms for each image resource representation created that allows them to specify more details about each (name, description, etc).
In the event that one or more resources fails to be created, the POST handler aborts all processing and appends each individual error message to an array. Then, a 409 Conflict is returned and the user is routed to a 409 Conflict error page that presents the contents of the error array, as well as a way back to the form that was submitted.
I guess it's better to send separate requests within a single connection. Of course, your web server should support it.
You won't want to send the HTTP headers 100 times for 100 orderlines, and you don't want to generate any more requests than necessary either.
Send the whole order in one JSON object to the server, to: server/order or server/order/new.
Return something that points to: server/order/order_id
Also consider using PUT instead of POST for creation.