Streaming MP3s from Amazon S3

Is there a way to stream MP3s stored on Amazon S3 via a Flash widget embedded in a website, or some other method?

Yes, there is. First, you need to create a bucket in your S3 account whose name is all lower case, globally unique and DNS-compatible; for example, I created a bucket called ‘media.torusknot.com’.
Then, to make it all look nice, you need to create a DNS CNAME entry to map a sub-domain of your site to that S3 bucket. That will allow you to access the files you upload to the bucket via a URL like ‘http://media.torusknot.com/somefile.mp3’. You just need to set the ACLs on the files and the bucket to make sure public access is allowed.
Finally, if you want to stream video files via a Flash player from S3 to another domain, you also have to tell Flash that it’s ok for the content to be pulled in from a different domain. Create a file called ‘crossdomain.xml’ in the bucket, with these contents:
<cross-domain-policy>
  <site-control permitted-cross-domain-policies="all"/>
  <!-- grant access to the media from any domain -->
  <allow-access-from domain="*"/>
</cross-domain-policy>
That allows the media to be accessed from anywhere - you can be more specific if you want but this is the simplest approach.
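For reference, here is a minimal sketch of uploading the files with public-read ACLs using boto3 (the bucket and file names are illustrative, not the author's exact setup):

import boto3

s3 = boto3.client("s3")
bucket = "media.torusknot.com"

# upload the MP3 so it is publicly readable via the CNAME'd URL
s3.upload_file("somefile.mp3", bucket, "somefile.mp3",
               ExtraArgs={"ACL": "public-read", "ContentType": "audio/mpeg"})

# upload the Flash cross-domain policy alongside the media
s3.upload_file("crossdomain.xml", bucket, "crossdomain.xml",
               ExtraArgs={"ACL": "public-read", "ContentType": "text/xml"})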
Related resources:
Using Amazon Web Services
Streaming Media From Amazon S3

To update the answer to this question: if you want to actually stream to clients, you can use Amazon CloudFront on top of your S3 bucket (as mentioned by Rudolf). Create a "streaming distribution" in CloudFront that points to your S3 bucket.
This will stream via RTMP (good for web and Android devices). You can use JW Player or a similar player to play the streamed files.
CloudFront streaming uses Adobe Flash Media Server 3.5.
There is also the ability to serve secure content using signed URLs.
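If you do use signed URLs, here is a rough sketch of generating one with botocore's CloudFrontSigner (the key pair ID, key file and distribution URL below are placeholders):

import datetime
import rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # sign with the private key of your CloudFront key pair
    with open("cloudfront-private-key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)
signed_url = signer.generate_presigned_url(
    "rtmp://s1234example.cloudfront.net/cfx/st/mp3:somefile",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)  # hand this to JW Player instead of the plain URL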

Related

Is it possible to use Google Cloud Storage daily backup with DigitalOcean Spaces (S3 compatible)?

The sync tool takes an s3:// format address - should this work with DigitalOcean?
It seems the more appropriate way to achieve this sync task is to use the transfer tool provided by Google Cloud Platform. Instead of configuring the URL for Spaces, you can create a job and specify that your transfer will be done using a URL list (via the Cloud Console or the Storage Transfer API).
If you have a lot of objects in your Spaces buckets, you could use the Spaces API to list each bucket's contents and, if you prefer, use the output of that API call to create a transfer job using the Storage Transfer API (you can take a look at the transfer spec here).
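As a rough sketch of creating such a transfer job from a URL list, assuming the google-api-python-client library (the project, bucket and list URL are illustrative):

from googleapiclient import discovery

client = discovery.build("storagetransfer", "v1")

transfer_job = {
    "description": "Spaces -> GCS via URL list",
    "status": "ENABLED",
    "projectId": "my-project",
    "transferSpec": {
        # a publicly readable TSV file listing the object URLs exported from Spaces
        "httpDataSource": {"listUrl": "https://example.com/spaces-url-list.tsv"},
        "gcsDataSink": {"bucketName": "my-backup-bucket"},
    },
    "schedule": {"scheduleStartDate": {"year": 2018, "month": 1, "day": 1}},
}

job = client.transferJobs().create(body=transfer_job).execute()
print(job["name"])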

I want to link multiple domains to one bucket with GCS

I want to link multiple domains to one bucket with GCS. However, according to the official documentation the bucket name has to be the domain name itself, so it seems that you cannot associate multiple domains with a single bucket. Does anyone know a way around this?
GCS does not support this directly. Instead, to accomplish this, you'd likely need to make use of Google Cloud Load Balancing with your GCS bucket as a backing store. With it, you can obtain a dedicated, static IP address that you can map several domains to; it also allows you to serve static and dynamic content under the same domain, and to swap out which bucket is being served at a given path. The main downside is added complexity and cost.
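As a very rough sketch of the first step, assuming the google-cloud-compute library (all names are placeholders; the URL map, target proxy, forwarding rule and DNS records for each domain still have to be created on top of this):

from google.cloud import compute_v1

client = compute_v1.BackendBucketsClient()
backend_bucket = compute_v1.BackendBucket(
    name="my-static-backend",
    bucket_name="my-gcs-bucket",  # the single GCS bucket behind all the domains
    enable_cdn=True,
)
operation = client.insert(project="my-project", backend_bucket_resource=backend_bucket)
operation.result()  # wait for the backend bucket to be created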

Google Cloud Platform - Data Distribution

I am trying to figure out a proper solution for the following:
We have a client from which we want to receive data, for instance a binary file of about 200 MB that is updated daily. We want them to deposit the data file(s) onto a local server near them (in Europe).
We then want to do one of the following: either retrieve the data from a local server where we are (China/HK), or log into their European server where they have deposited the files and pull the files directly ourselves.
QUESTIONS:
Can Google's cloud platform serve as a secure, easy way to provide a cloud drive on which to store and pull the data files?
Does Google's cloud platform distribute data such that files pushed to a server in Europe will be mirrored on a server in East Asia? (That is, where and how would this distribution model work with regard to my example?)
For storing binary data, Google Cloud Storage is a fine solution. To answer your questions:
Secure: yes. Easy: yes, in that you don't need to write different code depending on your location, but there is a caveat on performance.
Google Cloud Storage replicates files for durability and availability, but it doesn't mirror files across all bucket locations. So for the best performance, you should store the data in a bucket located where you will access it the most frequently. For example, if you create the bucket and choose its location to be Europe, transfers to your European server will be fast but transfers to your HK server will be slow. See the Google Cloud Storage bucket locations documentation for details.
If you need frequent access from both locations, you could create one bucket in each location and keep them in sync with a tool like gsutil rsync.
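For illustration, here is a rough Python sketch of the same kind of one-way sync using the google-cloud-storage library (the bucket names are made up):

from google.cloud import storage

def sync_buckets(source_name, dest_name):
    client = storage.Client()
    source = client.bucket(source_name)
    dest = client.bucket(dest_name)
    # index the destination by object name and MD5 so only changed objects are copied
    dest_hashes = {b.name: b.md5_hash for b in client.list_blobs(dest)}
    for blob in client.list_blobs(source):
        if dest_hashes.get(blob.name) != blob.md5_hash:
            source.copy_blob(blob, dest, blob.name)

sync_buckets("my-data-eu", "my-data-asia")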

Using Google storage bucket as on-demand pull CDN

I am trying to use the Google Cloud Storage bucket to serve static files from a web server on GCE. I see in the docs that I have to copy files manually but I am searching for a way to dynamically copy files on demand just like other CDN services. Is that possible?
If you're asking whether Google Cloud Storage will automatically and transparently cache frequently-accessed content from your web server, then the answer is no, you will have to copy files to your bucket explicitly yourself.
However, if you're asking if it's possible to copy files dynamically (i.e., programmatically) to your GCS bucket, rather than manually (e.g., via gsutil or the web UI), then yes, that's possible.
I imagine you would use something like the following process:
# pseudocode, not actual code in any language
HandleRequest(request) {
    gcs_uri = computeGcsUrlForRequest(request)
    if exists(gcs_uri) {
        data = read(gcs_uri)
        return data to user
    } else {
        new_data = computeDynamicData(request)
        # important! serve data to user first, to ensure low latency
        return new_data to user
        storeToGcs(new_data)  # asynchronously, don't block the request
    }
}
If this matches what you're planning to do, then there are several ways to accomplish this, e.g.,
language-specific libraries (recommended)
JSON API
XML API
Note that to avoid filling up your Google Cloud Storage bucket indefinitely, you should configure a lifecycle management policy to automatically remove files after some time or set up some other process to regularly clean up your bucket.
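For illustration only, here is a minimal Python sketch of the flow above, assuming Flask and the google-cloud-storage client library (the bucket and helper names are made up):

from flask import Flask, request
from google.cloud import storage

app = Flask(__name__)
bucket = storage.Client().bucket("my-on-demand-cache")  # hypothetical bucket

def compute_dynamic_data(req):
    return b"...generated content..."  # your own generation logic goes here

@app.route("/<path:object_name>")
def handle(object_name):
    blob = bucket.blob(object_name)
    if blob.exists():
        return blob.download_as_bytes()  # serve the copy already cached in GCS
    data = compute_dynamic_data(request)
    # in production, push this upload to a background task so the
    # user isn't kept waiting on the GCS write
    blob.upload_from_string(data)
    return data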

Google Cloud Storage 'static' bucketname not available

I am currently structuring a web application to serve out segments of our database represented as HTML iframes. I need to host my Django app's static files (such as Bootstrap) in a static file store on Google Cloud Storage in order to correctly render the HTML elements. However, when I try to create a bucket called 'static', GCS replies with the following error:
Sorry, that name is not available. Please try a different one.
Not only that, it is not allowing me to access or modify the URI, displaying a "Forbidden" message when I attempt to.
Does anyone know how to change this default setting by Google? There is no documentation regarding this.
It seems that a bucket with the given name has already been created by someone else. You have to choose a globally unique name.
Bucket names reside in a single Google Cloud Storage namespace. As a consequence, every bucket name must be unique across the entire Google Cloud Storage namespace. If you try to create a bucket with a bucket name that is already taken, Google Cloud Storage responds with an error message.
Use another name or use the default bucket. If your app was created after the App Engine 1.9.0 release, it should have a default GCS bucket named [your-app-id].appspot.com available. You can create your static files in that bucket and mimic a directory structure as follows.
[your-app-id].appspot.com/static/my-file-1
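For example, here is a rough sketch of pushing a local static/ directory (e.g. the output of Django's collectstatic) into that bucket, assuming the google-cloud-storage library:

import os
from google.cloud import storage

bucket = storage.Client().bucket("your-app-id.appspot.com")

for root, _, files in os.walk("static"):
    for name in files:
        local_path = os.path.join(root, name)
        # keep the static/ prefix so objects end up at .../static/css/site.css etc.
        blob = bucket.blob(local_path.replace(os.sep, "/"))
        blob.upload_from_filename(local_path)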