We're noticing some issues with the Filepicker.io Picasa integration. It seems that large images (over ~2MB) aren't being processed: the POST to www.filepicker.io/api/store/ returns "The specified bucket is not valid." for those files. Smaller files process just fine.
Not sure where the issue lies. The response would suggest a problem with our S3 bucket, but we've been able to process large Computer uploads without issue. Could it be a limitation of the Picasa Web API? Any information would be helpful. Thanks.
I'm trying to build a website on Squarespace in which the site links to a database file. The file is stored in a standard file system on a clustered server; no SQL architecture or anything that I explicitly know of. Unfortunately, Google Drive isn't an option due to the size of the file (> 200 GB). I'm rather lost due to the size constraint. Does anyone have an idea about how to do this? Can I set up some sort of server query using a link on the site? Can I upload the file from my computer and store it somewhere in the backend? Thanks.
"...the size of the file ( > 200 GB)..."
Unfortunately, Squarespace's own upload limits are far below this for the two places where files like that can be stored: file storage (20MB) and the developer-mode '/assets' folder (10MB). The CSS-/Style-related storage only supports images (and likely has a limit of less than 20MB). Digital download products can be 300MB (still too small for your file) and likely can't be linked to and accessed as you'd need for your application.
"...Can I set up some sort of server query using a link on the site?..."
If you mean a query on some other service besides Squarespace which connects to a file hosted on your Squarespace site, the answer is no, simply because there's no way to upload the file to Squarespace due to its size. If, however, you mean a query from your Squarespace site to the file hosted elsewhere, then this must be done using JavaScript, entirely client-side, due to Squarespace's lack of support for server-side languages.
"...Can I upload the file from my computer and store it somewhere in the backend?..."
See the options mentioned above, though all have file size limits below that of your file.
If you are able to utilize the file on your site using client-side/front-end JavaScript only, then perhaps you could host the file on Amazon S3 or another such provider and access it that way.
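For illustration, here's a minimal client-side sketch of that approach. The bucket URL is a placeholder, and the bucket would need a CORS policy that allows requests from your Squarespace domain:

// Fetch a slice of a large file hosted outside Squarespace (URL is a placeholder).
// A Range request avoids pulling the whole 200GB file into the browser.
var FILE_URL = 'https://example-bucket.s3.amazonaws.com/data/mydatabase.bin';

fetch(FILE_URL, { headers: { 'Range': 'bytes=0-1048575' } }) // first 1MB only
  .then(function (response) {
    if (!(response.ok || response.status === 206)) {
      throw new Error('Request failed: ' + response.status);
    }
    return response.arrayBuffer();
  })
  .then(function (buffer) {
    console.log('Received ' + buffer.byteLength + ' bytes');
    // ...parse or use the slice client-side here...
  })
  .catch(function (err) { console.error(err); });

S3 supports Range requests natively, which matters here: with a 200GB file, fetching only the slice you need is the only realistic approach in the browser.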
Using the Standard Verizon CDN. Origin is Blob Storage. The Accept-Encoding header is always set, but only some of the JS content returned is compressed with gzip. I've tried going through the CDN compression troubleshooting doc (https://learn.microsoft.com/en-us/azure/cdn/cdn-troubleshoot-compression) but it doesn't help. What's next in troubleshooting?
https://learn.microsoft.com/en-us/azure/architecture/best-practices/cdn
In Azure, the default mechanism is to compress content automatically when CPU utilization is below 50%.
This may or may not be your problem. There's a setting in the Azure portal to enable compression on the fly, which I think would be the solution.
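One way to narrow it down is to request the same asset once from the Blob Storage origin and once through the CDN endpoint, offering gzip, and compare the Content-Encoding that comes back. A quick Node.js sketch; both URLs are placeholders for your own origin and endpoint:

const https = require('https');

function checkCompression(url) {
  // Offer gzip and report what the server actually returns.
  https.get(url, { headers: { 'Accept-Encoding': 'gzip' } }, (res) => {
    console.log(url);
    console.log('  status:          ', res.statusCode);
    console.log('  content-encoding:', res.headers['content-encoding'] || 'none');
    console.log('  content-type:    ', res.headers['content-type']);
    res.resume(); // drain the body; only the headers matter here
  });
}

checkCompression('https://myaccount.blob.core.windows.net/assets/app.js'); // origin
checkCompression('https://myendpoint.azureedge.net/assets/app.js');        // CDN

If some JS files come back gzipped and others don't, compare their Content-Type values against the endpoint's list of MIME types to compress, and check whether the uncompressed ones fall outside the size range the CDN is willing to compress (very small and very large files are typically skipped).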
I am consuming the Ruby google-api-client v0.9.pre1, which I recently upgraded from v0.7.1.
I have been aware that uploading files from my Rails server using the Ruby library was slow. However, I was uploading file by file instead of batching, and I assumed that added some time. When I upgraded to 0.9.pre1 I refactored to the batch_upload APIs, and I still have very slow upload times.
The last several attempts have come out to about 0.23 MB/s upload: it is taking 12-13 seconds to upload 2-3 MB. My server is hosted on Google Compute Engine, which has access to my Google Storage bucket.
Can anyone give me an idea why it is so slow to upload files from a server within Google's hosting to Google Storage? Both AWS and Rackspace blow Google out of the water on storage upload speeds. I can't help but think I'm missing something. If not, I may head back in those directions.
Anyone getting better speeds?
Any help or ideas?
I know it's a known issue, but has anyone found a way to "fix" the connection failure on iPhone over 3G with relatively large files?
My application depends heavily on S3 for uploads and keeps failing on uploads of files larger than 200KB.
Depends on what's causing the failure.
An easy, albeit imperfect solution is to increase the timeout on your AmazonS3Client:
// Initialize the client, then raise the request timeout (in seconds)
// so large uploads over a slow 3G connection have time to complete.
s3 = [[AmazonS3Client alloc] initWithAccessKey:S3_ACCESS_KEY_ID withSecretKey:S3_SECRET_KEY];
s3.timeout = 240;
I figured this out some time ago but forgot to update the reply. What was actually happening: I was using an HTTP connection, and some mobile operators run inline "converters" (not sure what to call them) that take, for instance, your JPEG and "optimize" it for mobile devices (this also applies to other media types). Since the file was modified in transit, it no longer matched the hash in the S3 headers. The way I worked around the problem was to use an HTTPS connection, which prevents those intermediary servers from modifying my upload.
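For anyone hitting the same thing, the idea looks like this, sketched in JavaScript (Node.js) rather than the Objective-C SDK above. The presigned URL is a placeholder, and this assumes Content-MD5 was included when the URL was signed:

const crypto = require('crypto');
const fs = require('fs');

async function uploadOverHttps(presignedUrl, path) {
  const body = fs.readFileSync(path);
  // S3 verifies Content-MD5 against the bytes it actually receives, so a
  // proxy that rewrites the image triggers a BadDigest error instead of
  // S3 silently storing a modified file.
  const md5 = crypto.createHash('md5').update(body).digest('base64');

  // HTTPS keeps operator "converters" from seeing or altering the payload at all.
  const res = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': 'image/jpeg', 'Content-MD5': md5 },
    body: body
  });
  console.log('upload status:', res.status);
}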
I'm having issues with audio files in an iPhone web app. It seems that each time an audio file is played, it's loaded first and then played, even when repeating the same audio on a page that hasn't refreshed (playback is done via JavaScript). From what I've researched, manifest files would be great, but they are for offline applications. I'm now researching HTML5 databases.
Does anyone know if HTML5 databases can store audio files such as mp3? The end goal is to pull the mp3 from the database. It might still have to load the file each time from the database, but I'm hoping that's quicker than retrieving it from a server.
Thank you.
I think what you are after is possible; however, you have a significant hurdle in that the implementation of HTML5 databases in most browsers is limited to 5MB per origin, as per the W3C recommendation:
A mostly arbitrary limit of five megabytes per origin is recommended.
Having said that, the way it's implemented in iPhone Safari is that databases can grow until they reach 5MB in size, at which point the browser will ask the user whether to allow the extra size, asking again at 10, 50, 100 and 500MB (see the section "Estimated Database Size" in this post by html5doctor).
There is no limit on the number of databases you can create per domain in Safari; however, according to this post by Cantina Consulting, you can have a total of 50MB across all databases on a single domain.
Given these parameters, a possible work-around is to split your mp3 blobs across multiple databases, creating a new database each time you reach roughly 4.9MB (see the sketch after this list). However, even if you follow this design it may not be ideal, as you will still run into the following:
50MB is not a lot of audio: a typical 5-6 minute song is about 5MB at 128kbps, so that only gives you space for about one CD (60 min) of mp3 songs; after this you will need user cooperation to use additional database space.
You will still have significant security issues trying to play the mp3 blobs from the JavaScript runtime. It may be possible to bypass these by tricking Flash into thinking they are an mp3 stream, but I'm not sure how you'd go about it.
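To make the multi-database idea concrete, here's a rough sketch using the Web SQL openDatabase API. The database names and the 4.9MB threshold are illustrative, and the audio is stored base64-encoded, since Web SQL columns hold text rather than binary:

var DB_LIMIT = 4.9 * 1024 * 1024; // stay just under the 5MB prompt threshold

function openShard(index) {
  // Each shard is its own database: mp3store_0, mp3store_1, ...
  return openDatabase('mp3store_' + index, '1.0', 'mp3 shard ' + index, DB_LIMIT);
}

function saveTrack(state, trackId, base64Mp3) {
  // Rotate to a fresh database before this one would exceed the limit.
  if (state.bytes + base64Mp3.length > DB_LIMIT) {
    state.shard += 1;
    state.bytes = 0;
  }
  var db = openShard(state.shard);
  db.transaction(function (tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS tracks (id TEXT PRIMARY KEY, data TEXT)');
    tx.executeSql('INSERT OR REPLACE INTO tracks (id, data) VALUES (?, ?)',
                  [trackId, base64Mp3]);
  });
  state.bytes += base64Mp3.length;
  return state;
}

// usage: var state = { shard: 0, bytes: 0 }; saveTrack(state, 'song1', base64Data);

Keep in mind that base64 inflates the audio by about a third, which eats into the 5MB-per-database budget.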
Feel free to have a play around with this iPhone HTML5 SQL Client I put together; you may want to use something similar for experimenting with your local mp3 database.