OneDrive REST API - Upload - Files > 4GB

Uploading files greater than 4GB using the OneDrive REST API fails.
Sample request:
PUT https://apis.live.net/v5.0/folder.<removed>/files/test.vmdk HTTP/1.1
<removed>
Content-Length: 10000000000
Host: apis.live.net
Since it is now possible to upload files of up to 10GB using the OneDrive website and the desktop client, it would be great if this were also possible with the REST API.

We're getting this published on the documentation site in the next content refresh, but I wrote up a quick gist on how to upload files larger than the REST API's 100MB limit:
https://gist.github.com/rgregg/37ba8929768a62131e85
For large files, the best results come from splitting the file into multiple fragments and uploading those fragments. That way, if the connection drops after you have uploaded 90% of the file (in smaller fragments), you can resume from the last fragment instead of starting all over again.
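As a rough illustration of that fragment-upload pattern, here is a minimal Go sketch, assuming an upload URL that accepts PUT requests carrying Content-Range headers (the uploadURL parameter and the 10MB fragment size are placeholders; see the gist and the official documentation for the exact endpoint, authentication and fragment-size rules):

package upload

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fragmentSize is an assumption for illustration; check the service
// documentation for any required fragment size or alignment.
const fragmentSize = 10 * 1024 * 1024

// uploadInFragments PUTs a local file to uploadURL one fragment at a time,
// marking each fragment with a Content-Range header so that a dropped
// connection only costs the fragment that was in flight.
func uploadInFragments(uploadURL, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return err
	}
	total := info.Size()

	buf := make([]byte, fragmentSize)
	for offset := int64(0); offset < total; {
		n, err := io.ReadFull(f, buf)
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			err = nil // last, shorter fragment
		}
		if err != nil {
			return err
		}
		if n == 0 {
			break
		}

		req, err := http.NewRequest(http.MethodPut, uploadURL, bytes.NewReader(buf[:n]))
		if err != nil {
			return err
		}
		req.Header.Set("Content-Range",
			fmt.Sprintf("bytes %d-%d/%d", offset, offset+int64(n)-1, total))

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err // retry this fragment to resume instead of restarting
		}
		resp.Body.Close()
		if resp.StatusCode >= 300 {
			return fmt.Errorf("fragment at offset %d failed: %s", offset, resp.Status)
		}
		offset += int64(n)
	}
	return nil
}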

Related

Aspera Node API /files/{id}/files endpoint not returning up to date data

I am working on a webapp for transferring files with Aspera. We are using AoC for the transfer server and an S3 bucket for storage.
When I upload a file to my S3 bucket using Aspera Connect, everything appears to be successful: I see it in the bucket, and I see the new file in the directory when I run /files/browse on the parent folder.
I am refactoring my code to use the /files/{id}/files endpoint to list the directory because the documentation says it is faster than the /files/browse endpoint. After the upload is complete, when I run the /files/{id}/files GET request, the new file does not show up in the returned data right away. It only becomes available after a few minutes.
Is there some caching mechanism in place? I can't find anything about this in the documentation. When I make a transfer in the AoC dashboard everything updates right away.
Thanks,
Tim
Yes, the file-ID-based system uses an in-memory cache (Redis).
This cache is updated when a new file is uploaded using Aspera, but for files moved directly on the storage there is a daemon that periodically scans for and picks up new files.
If you want to bypass the cache, and have the API read the storage, you can add this header in the request:
X-Aspera-Cache-Control: no-cache
Another possibility is to trigger a scan by reading /files/{id} for the folder ID.
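For illustration, here is a minimal Go sketch of a listing request that bypasses the cache (the base URL, folder ID and bearer token are placeholders, and the authentication scheme is an assumption; adapt it to however your Node API access is set up):

package aspera

import (
	"io"
	"net/http"
)

// listFolderFresh lists a folder's children through the Node API while asking
// it to bypass its cache and read the storage directly. baseURL, folderID and
// token are placeholders for your Node API host, the folder's file ID and an
// access token.
func listFolderFresh(baseURL, folderID, token string) (string, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/files/"+folderID+"/files", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("X-Aspera-Cache-Control", "no-cache") // read the storage, not the cache

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}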

What is the best approach to make a large static binary available through an HTTP endpoint in Go on Google App Engine?

Due to the size of the files I am repeatedly hitting the deadline error (https://www.shiftedup.com/2015/03/12/deadline-errors-60-seconds-or-less-in-google-app-engine), and I cannot host these 3 binary files (available on 3 endpoints) on a CDN.
App Engine has two limits: 60 seconds and 32MB max per request. If you need to serve large files, you need to use Google Cloud Storage, which supports files up to 5GB (as of June 2016). You can keep these files private and serve them directly from the bucket to your client using a signed URL.
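As a rough sketch of the signed-URL approach using the cloud.google.com/go/storage client (the bucket name, object name, service-account email and key file below are placeholders, not part of the original answer):

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// Generate a short-lived signed URL for a private object so the client
// downloads it straight from Cloud Storage instead of through App Engine.
func main() {
	// Service-account key used to sign the URL (placeholder path).
	key, err := os.ReadFile("service-account.pem")
	if err != nil {
		log.Fatal(err)
	}

	url, err := storage.SignedURL("my-bucket", "binaries/tool-linux-amd64",
		&storage.SignedURLOptions{
			GoogleAccessID: "my-service-account@my-project.iam.gserviceaccount.com",
			PrivateKey:     key,
			Method:         "GET",
			Expires:        time.Now().Add(15 * time.Minute),
		})
	if err != nil {
		log.Fatal(err)
	}

	// Hand this URL to the client; it downloads directly from the bucket.
	fmt.Println(url)
}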

Finding latest TeamCity Backup via REST API

I found plenty of information and examples about triggering TeamCity 8.1.2 backups via the REST API.
But leaving the backup files on the same server is pretty useless for disaster recovery.
So I'm looking for a way to copy over the generated backup file to another location.
My question is about finding the name of the latest available backup file via the REST API.
The web GUI shows this information under "Last Backup Report" on the "Backup" page of the Server Administration.
I've dug through https://confluence.jetbrains.com/display/TCD8/REST+API#RESTAPI-DataBackup and the /httpAuth/app/rest/application.wadl on my server. I didn't find any mention of a way to get this info through the REST API.
I also managed to trigger a backup with a hope that perhaps the response gives this information, but it's not there - the response body is empty and the headers don't include this info.
Right now I intend to fetch the HTML page and extract this information from there, but this feels very hackish and fragile (the structure of the web page could change any time).
Is there a recommended way to get this information automatically?
Thanks.
JetBrains support got back to me with the right answer - I should use a POST method, not GET, even if the request body is empty.
Here is an example of a working request:
curl -u user:password --request POST 'http://localhost:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&fileName=testBackup'
The response contains the name of the generated backup file as plain text: testBackup_20150108_141924.zip

Accept-Encoding headers on Cloudfront serving assets from Rails 3.0.x on Heroku Cedar

When I use my Rails app to directly serve my assets through Heroku's Cedar stack (i.e. NOT through a CDN), they get gzip'd automatically. (See my previous question on why I'm confused about this.)
Now, I'm trying to set up Cloudfront to serve these assets instead, and ideally I'd like them to be gzip'd as well. From what I've read, I thought Cloudfront would pass on the Accept-Encoding header to my app, so the assets should be served gzip'd if supported (just as they are when you make a direct request to the asset on Heroku). But this isn't the case. The asset headers end up looking like this:
Age:510
Connection:keep-alive
Content-Length:178045
Content-Type:text/css
Date:Sun, 08 Jan 2012 18:55:13 GMT
Last-Modified:Sun, 08 Jan 2012 18:42:34 GMT
Server:nginx/0.7.67
Via:1.1 varnish, 1.0 7a0b4b3db0cc0d369fe1d6981bfb646a.cloudfront.net:11180 (CloudFront), 1.0 6af08f4042ec142b4b760ca4cd62041d.cloudfront.net:11180 (CloudFront)
X-Amz-Cf-Id:2b205edf4e9ef000a31a0208ca68f4e15b746eb430cde2ba5cc4b7dff4ba41a76c24f43cf498be02,8d5863a42eea452f86831a02f3eb648b26fe07013b08b95950f15ef8ba275822e1eb3b7ed2550d01
X-Cache:Hit from cloudfront
X-Varnish:2130919357
There's no mention of encoding here, and when I view the plain file, it's not gzip'd. So I'm wondering what I need to do here to get Cloudfront to request a gzip'd version of the asset from my app so that it can serve this to the client.
This post says you need to manually gzip and upload the files, but I don't see why that should be necessary. For one, it's annoying, and two, wouldn't Cloudfront request the file the same way my browser does? So why wouldn't it just serve up the gzip'd file as my app does by default?
Any tips on getting gzip'ing working properly would be great. I'd rather not have to manually gzip my files and upload them if possible.
Cedar-served files do NOT get gzipped by the stack; Cedar only serves whatever your application code serves. See the documentation:
Since requests to Cedar apps are made directly to the application server – not proxied through an HTTP server like nginx – any compression of responses must be done within your application. For Rack apps, this can be accomplished with the Rack::Deflater middleware. For gzipped static assets, make sure that Rack::Deflater is loaded before ActionDispatch::Static in your middleware stack.
Therefore the gzipping you're seeing is either a false header or is coming from somewhere else. If you've just pushed the same files to Cloudfront, then you're simply seeing the same responses passed through.
If you're looking at serving zipped assets via a CDN, I would really recommend bumping to Rails 3.1 and using the asset pipeline. Not only will this give you more control over your assets, but it will also give you a much easier path to serving them over a CDN.

Server Error while file upload in asp.net mvc2

When I upload a file of 35MB or more, I get:
Server Error
404 - File or directory not found.
The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable.
If I upload a file of 25MB or less, it works fine. The issue occurs only when I deploy to a server; when I run it on my local system it works perfectly. One thing I want to mention is that I have overridden the httpRuntime setting in my web.config with:
<httpRuntime maxRequestLength="3145728" executionTimeout="1200" requestValidationMode="2.0"/>
What might be the issue?
After two days I found the answer to my question: IIS7 has a maximum file upload limit of 30,000,000 bytes, which is around 28.6MB, and it is enforced regardless of the ASP.NET maxRequestLength setting. Luckily, there is an alternate solution that can be enabled at the site level rather than server-wide.
source: http://www.webtrenches.com/post.cfm/iis7-file-upload-size-limits
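The site-level override that article describes is the request filtering limit in the system.webServer section of web.config (a sketch; the 100MB value below is only an example, and note that maxAllowedContentLength is in bytes while httpRuntime's maxRequestLength is in kilobytes):
<system.webServer>
  <security>
    <requestFiltering>
      <!-- 104857600 bytes = 100MB; example value, adjust to your needs -->
      <requestLimits maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>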