I use filepicker.io on the client to get the FPFile object, then I send this object to the server. How can the server verify that the FPFile object is authentic? Shouldn't there be an easy way to do this, say, with an HMAC signature? I can't find any documentation for this.
UPDATE:
Ideally, I would like to verify that the file was:
Uploaded using my API key.
Uploaded by the same guy who is sending me the FPFile in an AJAX request.
The best way to do this would be to validate that the Filepicker URL and the metadata in the FPFile match what is returned by the /metadata endpoint, for example:
https://www.filepicker.io/api/file/JhJKMtnRDW9uLYcnkRKW/metadata
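A minimal server-side sketch of that check (Python with the requests library; the FPFile field names used here — url, filename, size, mimetype — are assumptions for illustration, not taken from the Filepicker docs):

```python
import requests

FILEPICKER_PREFIX = "https://www.filepicker.io/api/file/"

def verify_fpfile(fpfile):
    """Cross-check a client-supplied FPFile dict against the /metadata endpoint."""
    url = fpfile.get("url", "")
    # Only trust handles that actually live on Filepicker.
    if not url.startswith(FILEPICKER_PREFIX):
        return False

    resp = requests.get(url + "/metadata")
    if resp.status_code != 200:
        return False
    meta = resp.json()

    # Compare what the client claims with what Filepicker reports.
    return (meta.get("filename") == fpfile.get("filename")
            and meta.get("size") == fpfile.get("size")
            and meta.get("mimetype") == fpfile.get("mimetype"))
```

This confirms the handle exists and that the metadata the client sent matches; it does not by itself prove which API key or which user performed the upload.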
Related
I'm trying to write an API that would receive a PDF file, process it, and send the results back to the user within the same request.
I'm confused as to which request method should be used for this task, since the user is trying to GET a response from the server, but they also POST a file.
In this case, should/can I add a PDF file as a parameter to the GET request, or should I use a POST request - but if the latter, how does the user get the processed result?
GET is usually used to retrieve information that is already on the server, while POST is used to send information to the server, and the server responds based on that information.
I think your question should be more focused on whether to use POST or PUT. Take a look at this guide, and apply whichever fits your case.
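To make the first answer concrete, here is a minimal client-side sketch (Python with the requests library; the endpoint URL and form field name are placeholders): the PDF goes up in the POST body and the processed result comes back in the response to that same request.

```python
import requests

# Hypothetical endpoint and form field name, just for illustration.
URL = "https://api.example.org/pdf/process"

with open("input.pdf", "rb") as f:
    # The PDF travels in the POST body as multipart/form-data ...
    resp = requests.post(URL, files={"file": ("input.pdf", f, "application/pdf")})
resp.raise_for_status()

# ... and the processed result comes back in the response to that same request.
with open("result.pdf", "wb") as out:
    out.write(resp.content)
```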
I'm creating a headless API that's going to drive an Angular front end. I'm having a bit of trouble figuring out how I should handle user authentication though.
Obviously the API should run over SSL, but the question that comes up is how I should send the request that contains the user's password: over GET or POST. It's a RESTful API, and since I'm retrieving information it should be a GET request. But sending the password over GET means it's part of the URI, right? I know even a GET request is encrypted over HTTPS, but is that still the correct way? Or is this a case to break from REST and put the data in the body or something (can a GET request even have a body)?
If you pass the credentials in a request header, you will be fine with either a GET or POST request. You have the option of using the established Authorization header with your choice of authentication scheme, or you can create custom headers that are specific to your API.
When using header fields as a means of communicating credentials, you do not need to fear the credentials being written to the access log, as header values are not included in that log. Header fields also conform to REST conventions and should in fact be used to communicate any metadata relevant to the resource request/response. Such metadata can include, but is not limited to, information like collection size, pagination details, or the locations of related resources.
In summary, always use header fields as a means of authentication/authorization.
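As a small illustration (Python requests; the URL and the custom header name are placeholders), the credentials ride in headers rather than in the URI, so the request can stay a plain GET:

```python
import base64
import requests

username, password = "alice", "s3cret"

# Standard Authorization header with the Basic scheme
# (requests can also build this for you via auth=(username, password)).
token = base64.b64encode(f"{username}:{password}".encode()).decode()

resp = requests.get(
    "https://api.example.org/v1/reports",    # placeholder URL
    headers={
        "Authorization": f"Basic {token}",
        "X-Client-Version": "1.4.2",         # example of a custom, API-specific header
    },
)

# Response metadata (e.g. collection size) can come back in headers too.
print(resp.status_code, resp.headers.get("X-Total-Count"))
```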
Mostly, a GET request binds its data into the URL itself, so it is more readable (and visible) than POST.
So if you use GET, there is a possibility that the data lives on in the history and log files.
Using ?user=myUsername&pass=myPassword is exactly like using a GET-based form and, while the Referer issue can be contained, the problems regarding logs and history remain.
Sending any kind of sensitive data over GET is dangerous, even over HTTPS. The data might end up in log files on the server and will be included in the Referer header in links to, or includes from, other sites. It will also be saved in the browser history, so an attacker might try to guess and verify the original contents of the link with an attack against the history.
You could also send a data body with a GET request, but I guess this isn't supported by all libraries.
It's better to use POST or request headers. Look at other APIs and how they handle it.
But you could still use GET with Basic authentication, as shown here: http://restcookbook.com/Basics/loggingin/
We have some IoT sensors that POST JSON payloads to an endpoint. They are configured with only an HTTPS URL to send to; there is no ability to set up authentication etc.
We need a basic ability to see which sensor is sending data, and to loosely prevent just anyone from sending payloads. Full authentication will not be possible.
It was suggested we could put a token in the path and use it as a super basic API Key. I was wondering what the best format for the route should be...
/api/events/_ingest/api-key
/api/producer/api-key/events/_ingest
I was wondering what the best format for the route should be: /api/events/_ingest/api-key or /api/producer/api-key/events/_ingest
There's no best approach here; both are really bad. The API key does not belong in the URL. It should be sent in the standard Authorization HTTP header.
Since you mentioned in the comments that it will be something temporary, you could try a query parameter. It's still bad, though, but you will be able to reuse this same route later, just moving the API key to an HTTP header once your clients support it:
/api/events/_ingest?api-key=somecoolhashgoeshere
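A small server-side sketch of that migration path (Flask here, purely illustrative; the Bearer scheme and the key table are assumptions): accept the key from the Authorization header and only fall back to the query parameter while the sensors cannot set headers.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In reality this would live in a database or config, one key per sensor.
VALID_KEYS = {"somecoolhashgoeshere": "sensor-42"}

def resolve_sensor():
    # Preferred: "Authorization: Bearer <api-key>".
    auth = request.headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        key = auth[len("Bearer "):]
    else:
        # Temporary fallback for clients that can only be configured with a URL.
        key = request.args.get("api-key", "")
    return VALID_KEYS.get(key)

@app.route("/api/events/_ingest", methods=["POST"])
def ingest():
    sensor = resolve_sensor()
    if sensor is None:
        abort(401)
    event = request.get_json(force=True)
    # ... store the event, tagged with the sensor identity ...
    return jsonify({"accepted": True, "sensor": sensor}), 202
```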
Say we want a REST API to support file uploads, and we want uploads to be done directly on S3.
According to this solution, Amazon S3 direct file upload from client browser - private key disclosure, we have to create a POLICY and a SIGNATURE for the user to be allowed to upload to S3.
However, we want a single entry point for the API, including uploads.
Can we:
1. in our API, catch POST https://www.example.org/users/1234/objects
2. calculate POLICY and SIGNATURE to allow direct upload to S3
3. return a 307 "Temporary Redirect" to https://s3-bucket.s3.amazonaws.com
How to pass POLICY and SIGNATURE in the redirect?
What is best practice here?
You don't redirect; instead, your API should return the policy and signature in the response (say, in JSON).
Then the browser can use these values to upload directly to S3, as described in the linked document. This is a two-step process.
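For instance, a minimal sketch with boto3 (the bucket name, key layout, size limit, and expiry are placeholders): the API computes the policy and signature and hands them back as JSON, and the browser then POSTs the file straight to S3 using those fields.

```python
import boto3

s3 = boto3.client("s3")

def create_upload_ticket(user_id, filename):
    """Return the pre-signed POST policy/signature the browser needs for a direct S3 upload."""
    key = f"users/{user_id}/{filename}"          # placeholder key layout
    presigned = s3.generate_presigned_post(
        Bucket="example-upload-bucket",          # placeholder bucket name
        Key=key,
        Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],  # e.g. cap uploads at 10 MB
        ExpiresIn=300,                           # policy valid for 5 minutes
    )
    # presigned contains "url" and "fields"; the browser POSTs the file to that
    # URL with those fields (plus the file itself) as the second step.
    return presigned
```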
I'm working on a REST web service, and in particular on authentication methods for browser-based requests (using JSONP or cross-domain XHR requests/XDomainRequest).
I've done some research into OAuth, and also Amazon's AWS. The big drawback of both is that I need to do either of the following:
Store secret tokens in the browser
Let a server-side script handle the signing. Basically, I'd first make a request to a server of mine to get a specific pre-signed JavaScript request, which I'd then use to connect to the real REST server.
What are some other options or suggestions?
Well, the only true answer here is proxying through a server, using sessions/cookies to authenticate, and of course using SSL. Sorry for answering my own question.
Yes, JSONP call authentication is tough, because the browser client needs to know the shared secret.
An option would be to make the endpoint anonymous (no authentication necessary). This comes with other security holes (the server is open to attacks, anyone can call it). But you could handle this by only exposing very limited resources and/or using rate limiting. With rate limiting, only a certain number of calls are allowed from one client within a certain range of time. It works by identifying the client (e.g. by source IPs or other client fingerprints).
I once experimented with one-time tokens, but they all somewhat failed because you have the problem of getting the token itself and of protecting against multiple retrievals of the token by bots (which again comes back to the need for rate limiting).
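A very rough sketch of that rate-limiting idea (in-memory, sliding window, keyed by source IP; fine for illustration, not for production or multi-process servers):

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_CALLS_PER_WINDOW = 30

_recent_calls = defaultdict(list)   # source IP -> timestamps of recent calls

def allow_request(source_ip):
    """Return True if this client is still under its per-window quota."""
    now = time.time()
    calls = [t for t in _recent_calls[source_ip] if now - t < WINDOW_SECONDS]
    if len(calls) >= MAX_CALLS_PER_WINDOW:
        _recent_calls[source_ip] = calls
        return False
    calls.append(now)
    _recent_calls[source_ip] = calls
    return True
```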
I haven't tried this myself, but you can try the following (I am pretty sure I will get some feedback).
On the server side, generate a timestamp. Using HMAC-SHA256, generate a key for that timestamp using a password, and send the generated key and the timestamp in the HTML.
When you make the AJAX call to the web service (assuming it is a different server), send the key and the timestamp along with the request. Check that the timestamp is within 5-15 minutes.
If it is, compute the HMAC-SHA256 with the same password and timestamp, and check whether the generated key is the same.
Also, on the client side, you will have to check whether your timestamp is still valid before making the call.
You can generate the key using the following URL:
http://buchananweb.co.uk/security01.aspx
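In code, the scheme described above might look roughly like this (Python's standard hmac/hashlib modules; the secret, the 15-minute window, and the token format are assumptions, and both servers would need to share the same password):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-password"    # placeholder; this never leaves the servers
MAX_AGE_SECONDS = 15 * 60           # accept timestamps up to 15 minutes old

def issue_token():
    """Page server: generate a timestamp and its HMAC-SHA256 key to embed in the HTML."""
    timestamp = str(int(time.time()))
    key = hmac.new(SECRET, timestamp.encode(), hashlib.sha256).hexdigest()
    return timestamp, key

def verify_token(timestamp, key):
    """Web service: recompute the HMAC for the submitted timestamp and compare."""
    age = time.time() - int(timestamp)
    if age < 0 or age > MAX_AGE_SECONDS:
        return False
    expected = hmac.new(SECRET, timestamp.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, key)
```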