What is maximum file size to upload via Dropbox API v2? - dropbox-api

I am trying to upload files to Dropbox using the API v2 /upload endpoint. Sometimes I get a 413 error response from the Dropbox server (Request entity too large).
Unfortunately, the maximum file size is not described in the documentation. Maybe someone knows what it is?

Sorry, we'll add this to the documentation.
I'm pretty sure the limit is unchanged from API v1, so it's 150MB. Beyond that, you should use /upload/session/*.
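If it helps, here is a minimal sketch of that /upload_session/* flow against the documented HTTP content endpoints (no SDK). The token, file name, chunk size, and target path are placeholders, and the session_id extraction is deliberately crude; use a real JSON parser in practice.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SessionUpload {
        static final int CHUNK = 8 * 1024 * 1024; // any chunk size well under 150MB works

        // One call to a content endpoint; `arg` goes in the Dropbox-API-Arg header.
        static String call(String endpoint, String token, String arg, byte[] body, int len)
                throws IOException {
            HttpURLConnection con = (HttpURLConnection) new URL(
                    "https://content.dropboxapi.com/2/files/" + endpoint).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Authorization", "Bearer " + token);
            con.setRequestProperty("Dropbox-API-Arg", arg);
            con.setRequestProperty("Content-Type", "application/octet-stream");
            con.setDoOutput(true);
            try (OutputStream out = con.getOutputStream()) {
                if (len > 0) out.write(body, 0, len);
            }
            try (InputStream in = con.getInputStream()) {
                return new String(in.readAllBytes());
            }
        }

        public static void main(String[] args) throws IOException {
            String token = System.getenv("DROPBOX_TOKEN"); // placeholder access token
            byte[] buf = new byte[CHUNK];
            try (InputStream file = new FileInputStream("big-file.bin")) {
                int n = Math.max(file.read(buf), 0);
                String resp = call("upload_session/start", token, "{\"close\": false}", buf, n);
                // Crude session_id extraction; use a real JSON parser in real code.
                String sessionId = resp.split("\"session_id\"\\s*:\\s*\"")[1].split("\"")[0];
                long offset = n;
                while ((n = file.read(buf)) > 0) {
                    String cursor = "{\"session_id\": \"" + sessionId + "\", \"offset\": " + offset + "}";
                    call("upload_session/append_v2", token,
                            "{\"cursor\": " + cursor + ", \"close\": false}", buf, n);
                    offset += n;
                }
                String cursor = "{\"session_id\": \"" + sessionId + "\", \"offset\": " + offset + "}";
                call("upload_session/finish", token,
                        "{\"cursor\": " + cursor
                        + ", \"commit\": {\"path\": \"/big-file.bin\", \"mode\": \"add\"}}",
                        buf, 0);
            }
        }
    }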

Related

Best approach to upload a file via a REST API from API Gateway

Use case: a customer uploads a file through our public REST API to our S3 bucket, and we then process the file using downstream services.
After doing some research, I found three ways to do it:
Uploading using the OCTET-STREAM file type
Uploading the file as a multipart form-data request
Uploading the file using a pre-signed URL
In the first two cases, the user sends the binary file and we upload it to S3 after validating it.
In the third method, the user has to hit three APIs: a first call to get the S3 pre-signed URL, which grants the user access to upload the file to S3; a second call in which the user uploads the file to that pre-signed URL; and, once the upload is complete, a third request to process the file.
Are there any security issues with method 3, given that a user could misuse the pre-signed URL to upload a malicious file?
Which of these methods is best according to industry practice?
Details of each approach:
1. Uploading using OCTET-STREAM file type
Pros:
This method is good for uploading file types that can be opened in a specific application, such as xlsx.
Only one API hit; direct file upload.
Cons:
This option is not suitable for uploading multiple files. If we need to support multi-file upload in the future, this would have to change to multipart/form-data (approach 2).
No metadata can be sent as a body parameter; metadata can only be sent in headers.
2. Upload the file using form-data request
The user uploads the file with the API request by attaching it as a multipart form.
Pros:
We can send multiple files at the same time.
We can send extra parameters in the body.
3. Upload the file using the pre-signed URL
Cons:
The customer has to hit three APIs to upload the file (two API hits to upload, then one more API hit to trigger processing of the file).
If you want them to load data into a bucket, the best way will almost always be the pre-signed URL. This gives you complete control over how you hand out access to the bucket, while still allowing them to upload directly into the bucket once they have that access.
In the first two examples the user can send malicious data to your API, potentially DoSing the server or incurring costs on you to manage the payloads, since you have no control over access (it is public).
In the third case they can request a URL from you, but that is it: beyond spamming you with requests for URLs, they can't access the bucket or do anything else unless you grant them a URL. This is much better than having them spam your upload endpoint with large junk files that you process before deciding you didn't want them anyway.
Finally, the pre-signed URL is the pattern AWS expects you to use, so AWS has a lot of support for managing the access, roles, logging, monitoring, etc. that you would want to put around this service. When you stand up the API yourself, all of that will be up to you to manage.
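For what it's worth, generating such a URL is a few lines with the AWS SDK for Java v2; the bucket, key, and expiry below are placeholder values:

    import java.time.Duration;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;
    import software.amazon.awssdk.services.s3.presigner.S3Presigner;
    import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

    public class Presign {
        public static void main(String[] args) {
            // Credentials and region come from the default provider chain.
            try (S3Presigner presigner = S3Presigner.create()) {
                PutObjectRequest put = PutObjectRequest.builder()
                        .bucket("my-upload-bucket")       // placeholder bucket
                        .key("uploads/customer-file.bin") // placeholder key
                        .build();
                PutObjectPresignRequest req = PutObjectPresignRequest.builder()
                        .signatureDuration(Duration.ofMinutes(10)) // keep URLs short-lived
                        .putObjectRequest(put)
                        .build();
                // Hand this URL to the client; they PUT the file body to it directly.
                System.out.println(presigner.presignPutObject(req).url());
            }
        }
    }

Keeping the signature duration short limits how long a leaked URL can be abused, which addresses much of the security concern in the question.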

Can I get/put OneNote RAW page data?

I'm trying to backup/restore the OneNote contents on a business site.
Currently the API returns the HTML-translated content of a page, but some extensions and ink data are missing.
I did see a beta-status API for ink data, but why not just get the whole data as it is, and restore it as it was?
I also know OneNote data is synced to OneDrive storage, but downloading it as-is and restoring it with the Graph API doesn't work.
Instead, I need to parse the HTML and download the resources separately, and if some content is missing I have to wait for yet another beta API. And when restoring, I have to construct a multipart request.
Can you please provide an additional API for downloading/uploading the RAW content?
Thanks in advance.
This isn't supported by the OneNote API. In theory, you could use the OneDrive API to download the notebook as a folder, or a section as a file, which would include all the information.
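Along those lines, here is a rough sketch of pulling one section file down through the Microsoft Graph drive endpoint. The path under the drive root is hypothetical (where notebook folders live varies by tenant and site), and the access token is assumed to come from your existing auth flow:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class SectionBackup {
        public static void main(String[] args) throws Exception {
            String token = System.getenv("GRAPH_TOKEN"); // an existing Graph access token
            // Hypothetical path; adjust to wherever the notebook lives in the drive.
            String url = "https://graph.microsoft.com/v1.0/me/drive/root:"
                    + "/Notebooks/My%20Notebook/Section%20Name.one:/content";
            HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
            con.setRequestProperty("Authorization", "Bearer " + token);
            // Graph answers with a redirect to a download URL; it is followed automatically.
            try (InputStream in = con.getInputStream()) {
                Files.copy(in, Path.of("Section.one"), StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

As the question already notes, uploading such a file back is not guaranteed to restore a working notebook, so treat this as backup only.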

Analytics Metadata API (Java)

I am trying to get the full list of allowed dimensions and metrics from the Metadata API, but I'm having trouble accessing it because I am using the Reporting API v4. Does anyone have an idea how I can make something like Metadata.Columns.List("ga").execute() work?
What I ended up doing is using the v3 client library for the Metadata API (separately from the v4 Reporting API client library), and I was able to make it work with analytics.metadata().columns().list("ga").execute();
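For anyone else landing here, a minimal sketch of that v3 setup; the credential is assumed to come from whatever OAuth 2.0 / service-account flow you already use for the Reporting API:

    import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
    import com.google.api.client.http.HttpRequestInitializer;
    import com.google.api.client.json.jackson2.JacksonFactory;
    import com.google.api.services.analytics.Analytics;
    import com.google.api.services.analytics.model.Column;
    import com.google.api.services.analytics.model.Columns;

    public class ListGaColumns {
        public static void main(String[] args) throws Exception {
            // Supply the credential from your existing OAuth 2.0 / service-account flow.
            HttpRequestInitializer credential = null; // TODO: real credential goes here

            Analytics analytics = new Analytics.Builder(
                    GoogleNetHttpTransport.newTrustedTransport(),
                    JacksonFactory.getDefaultInstance(),
                    credential)
                .setApplicationName("metadata-example")
                .build();

            Columns columns = analytics.metadata().columns().list("ga").execute();
            for (Column c : columns.getItems()) {
                System.out.println(c.getId() + "\t" + c.getAttributes().get("type"));
            }
        }
    }

The attributes map tells you whether each column is a DIMENSION or a METRIC, which is exactly what you need to validate Reporting API v4 requests.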

PUT object to S3 using v4 authentication without hashing the payload

I am working on a project to upload objects to S3 using Java code. There are some external restrictions that limit my implementation, and overall I'm not sure if S3 supports what I'm trying to do.
The restrictions are:
Use V4 authentication
header authentication, not query parameter
REST API, not AWS java SDK
Payload is not hashed (no SHA-256)
That last requirement is because we have hardware support that streams the data directly from storage, so the driving code never touches the data.
Apparently with query-parameter authentication I can substitute 'UNSIGNED-PAYLOAD' for the payload hash, but not with header-based authentication.
So my question is whether there is any way to upload an object to S3 using the REST API, a v4 signature, and no hash (SHA-256 or other) over the data itself.
Thanks!
No, according to this post on Amazon's forums:
Re: https://forums.aws.amazon.com/message.jspa?messageID=573632
UNSIGNED-PAYLOAD can be used only with query-string authentication. If you use Authorization header authentication, it cannot be used. As an option, you can use chunked transfer, so you will only have to calculate hashes for small chunks of data that can be buffered for hashing. Also, you can still use the older Signature V2, though it won't work with regions created after 30-Jan-2014.
It looks like you can do this with v2 signatures using the header method but, as mentioned above, only against regions created before Jan 30th, 2014.
See: http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationStringToSign
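For the chunked-transfer option mentioned in that quote, each chunk carries its own signature that chains off the previous one. Below is a rough sketch of just the per-chunk signature calculation from the SigV4 "streaming" scheme; it assumes you already have the standard SigV4 derived signing key and the seed signature from the Authorization header, and that the request carries Content-Encoding: aws-chunked, x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD, and x-amz-decoded-content-length:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class ChunkSigner {
        // Hex-encoded SHA-256 of the empty string (a fixed value in this scheme).
        static final String EMPTY_SHA256 =
                "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

        static String hex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        static byte[] hmacSha256(byte[] key, String data) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        }

        // prevSignature is the seed signature from the Authorization header for the
        // first chunk, then the previous chunk's signature for each one after that.
        // timestamp and scope must match the request headers, e.g.
        // "20240101T000000Z" and "20240101/us-east-1/s3/aws4_request".
        static String signChunk(byte[] signingKey, String timestamp, String scope,
                                String prevSignature, byte[] chunk) throws Exception {
            String stringToSign = String.join("\n",
                    "AWS4-HMAC-SHA256-PAYLOAD",
                    timestamp,
                    scope,
                    prevSignature,
                    EMPTY_SHA256,
                    hex(MessageDigest.getInstance("SHA-256").digest(chunk)));
            return hex(hmacSha256(signingKey, stringToSign));
        }
    }

Each chunk then goes on the wire as hex(size);chunk-signature=<sig>\r\n<data>\r\n, terminated by a zero-length chunk. Note this still hashes the data, just in bufferable pieces, so it may not satisfy your hardware-streaming constraint.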
You can also upload files using POST, which does not require a payload hash. But with POST, file size is limited to 5 GB.
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-authentication-HTTPPOST.html

Best practices to redirect a HTTP POST to my REST API towards my S3 bucket?

Say we want a REST API to support file uploads, and we want uploads to be done directly on S3.
According to this solution (Amazon S3 direct file upload from client browser - private key disclosure), we have to create a POLICY and SIGNATURE for the user to be allowed to upload to S3.
However, we want a single entry point for the API, including uploads.
Can we:
1. in our API, catch POST https://www.example.org/users/1234/objects
2. calculate POLICY and SIGNATURE to allow direct upload to S3
3. return a 307 "Temporary Redirect" to https://s3-bucket.s3.amazonaws.com
How to pass POLICY and SIGNATURE in the redirect?
What is best practice here?
You don't redirect; instead, your API should return the policy and signature in the response (say, in JSON).
The browser can then use these values to upload directly to S3 as described in that document. This is a two-step process.
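A rough sketch of what that response could look like on the server side, using SigV4 POST policy signing; the bucket, key prefix, expiration, and credential scope are placeholder values, and sigV4SigningKey is assumed to be derived via the standard SigV4 key-derivation steps:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class UploadGrant {
        static byte[] hmac(byte[] key, String data) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        }

        static String hex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        // Builds the JSON-serializable body the API returns instead of a 307.
        static Map<String, Object> buildUploadGrant(byte[] sigV4SigningKey, String amzDate,
                                                    String credential) throws Exception {
            // Policy restricting the upload to this user's prefix; values are examples.
            String policyJson = "{\"expiration\":\"2024-01-01T01:00:00Z\",\"conditions\":["
                    + "{\"bucket\":\"my-upload-bucket\"},"
                    + "[\"starts-with\",\"$key\",\"users/1234/\"],"
                    + "{\"x-amz-algorithm\":\"AWS4-HMAC-SHA256\"},"
                    + "{\"x-amz-credential\":\"" + credential + "\"},"
                    + "{\"x-amz-date\":\"" + amzDate + "\"}]}";
            String policyB64 = Base64.getEncoder()
                    .encodeToString(policyJson.getBytes(StandardCharsets.UTF_8));

            Map<String, String> fields = new LinkedHashMap<>();
            fields.put("key", "users/1234/${filename}");
            fields.put("policy", policyB64);
            fields.put("x-amz-algorithm", "AWS4-HMAC-SHA256");
            fields.put("x-amz-credential", credential);
            fields.put("x-amz-date", amzDate);
            // SigV4 POST: the signature is the HMAC of the base64 policy document.
            fields.put("x-amz-signature", hex(hmac(sigV4SigningKey, policyB64)));

            Map<String, Object> body = new LinkedHashMap<>();
            body.put("url", "https://my-upload-bucket.s3.amazonaws.com");
            body.put("fields", fields);
            return body;
        }
    }

The browser then issues a multipart/form-data POST to url with those fields plus the file itself, so the upload never passes through your API.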