Is it possible to upload a video file to an IBM Cloud Functions / OpenWhisk function and encode it?

We are developing a video streaming platform in which we want to encode videos into the H.264 format after they are uploaded.
We decided to use IBM Cloud Functions / OpenWhisk for the encoding, but we have some doubts. Is it possible to upload a video file to IBM Cloud Functions / OpenWhisk and encode it? Is this supported, and how can it be done?

Yes, that should be possible.
I recommend checking out the "Dark Vision" app built on IBM Cloud Functions: you upload videos, which are then split into frames, and the frames are processed with Visual Recognition. The source code for Dark Vision is available on GitHub.
In addition, you should go over the documented IBM Cloud Functions system limits to see if they match your requirements; a sketch of the basic pattern follows.
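As a sketch of how that could look: rather than uploading the video in the action's payload (the system limits above cap payload size), a common pattern is to keep the video in IBM Cloud Object Storage and pass the action a reference to it. The Python action below is a sketch under that assumption; it presumes a runtime with ffmpeg available (e.g. a custom Docker action) and the ibm-cos-sdk package, and all parameter, bucket, and key names are hypothetical.

```python
# Sketch of an IBM Cloud Functions / OpenWhisk Python action that encodes a
# video to H.264. Assumes a runtime image with ffmpeg installed (e.g. a
# custom Docker action) and the ibm-cos-sdk package; all parameter names,
# bucket names, and keys are hypothetical.
import subprocess

import ibm_boto3
from ibm_botocore.client import Config


def main(params):
    cos = ibm_boto3.client(
        "s3",
        ibm_api_key_id=params["apikey"],
        ibm_service_instance_id=params["cos_instance_id"],
        config=Config(signature_version="oauth"),
        endpoint_url=params["cos_endpoint"],
    )
    src = "/tmp/input"        # action-local scratch space
    dst = "/tmp/output.mp4"

    # 1. Fetch the uploaded video from Cloud Object Storage.
    cos.download_file(params["bucket"], params["key"], src)

    # 2. Encode to H.264 with ffmpeg (must be present in the runtime).
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-preset", "fast", dst],
        check=True,
    )

    # 3. Write the result back for the streaming platform to pick up.
    out_key = params["key"] + ".h264.mp4"
    cos.upload_file(dst, params["bucket"], out_key)
    return {"encoded": out_key}
```

Note that the execution-time and memory limits mentioned above effectively restrict this approach to short clips; for long videos, a dedicated transcoding service is likely a better fit.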

Related

Does using saved Google Text-to-Speech audio files violate Google Cloud Usage Terms?

My app has a list of fixed paragraphs that need to be converted to speech. I plan to use Google's Text-to-Speech API to synthesize them once and then download the audio files, so that I don't need to call the API repeatedly, given that the paragraphs, once again, do not change.
Does this violate the Google Cloud Terms of Service restrictions?
Good news: it seems that caching synthesized audio files to avoid re-synthesis and reduce cost is allowed with Google Text-to-Speech, as promoted by one of their documented use cases. A minimal caching sketch follows.
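As a sketch of that pattern with the google-cloud-texttospeech Python client: synthesize each fixed paragraph once, cache the MP3 on disk, and serve the cached file on every later request. The cache layout and voice settings here are illustrative, not prescribed by the API.

```python
# Sketch: synthesize each fixed paragraph once and cache the MP3 on disk,
# using the google-cloud-texttospeech client. The cache layout is hypothetical.
import hashlib
from pathlib import Path

from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()
CACHE = Path("tts_cache")
CACHE.mkdir(exist_ok=True)


def speech_for(paragraph: str) -> bytes:
    # Key the cache on the paragraph text, since the paragraphs never change.
    path = CACHE / (hashlib.sha256(paragraph.encode()).hexdigest() + ".mp3")
    if path.exists():
        return path.read_bytes()  # cached: no API call needed

    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=paragraph),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US",
            ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL,
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    path.write_bytes(response.audio_content)
    return response.audio_content
```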

AWS CloudFront Video Streaming - Video Quality Change

I am developing a video-on-demand mobile application. Videos are converted using AWS Elemental MediaConvert and stored in an S3 bucket, and they are streamed using CloudFront.
The problem I'm facing is streaming the video at different qualities (720p, 360p, ...).
If users have limited data, they may wish to watch the video in a lower quality. So how can the video quality be changed manually?
You can combine the solution with Lambda@Edge and select the video resolution via a GET parameter, along the lines of the approach described on the AWS Blog; a sketch of the idea follows.
Read more here.
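A minimal Lambda@Edge sketch of that idea, assuming a Python runtime on an origin-request trigger and a hypothetical bucket layout in which each rendition lives under a resolution-named prefix (/1080p/..., /720p/..., /360p/...):

```python
# Minimal Lambda@Edge (origin-request) sketch: rewrite the request URI based
# on a ?resolution= GET parameter. Assumes renditions are stored under
# prefixes like /720p/... and /360p/... in the origin bucket (hypothetical).
from urllib.parse import parse_qs

ALLOWED = {"1080p", "720p", "360p"}


def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    params = parse_qs(request.get("querystring", ""))
    resolution = params.get("resolution", ["720p"])[0]
    if resolution in ALLOWED:
        # e.g. /videos/movie.mp4 -> /720p/videos/movie.mp4
        request["uri"] = f"/{resolution}{request['uri']}"
    return request
```

Since the URI rewrite depends on the query string, the distribution's cache policy must include the resolution parameter in the cache key, otherwise different renditions would collide in the CloudFront cache.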
As you stated, you need the video in multiple resolutions so users can switch quality based on their internet connection, and you are using AWS MediaConvert.
You first need to transcode the video into multiple resolutions with MediaConvert, then pass these renditions to a media player so it can show the quality-selection feature (see the sketch below). Let me know if you need more clarification or help with that.
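A sketch of the transcoding step with boto3, assuming a pre-created MediaConvert job template (here hypothetically named vod-multi-resolution) that defines the ABR renditions, e.g. an HLS output group with 1080p/720p/360p outputs; the role ARN and bucket are placeholders:

```python
# Sketch: submit a MediaConvert job from a template that defines multiple
# renditions. Template name, role ARN, and bucket are placeholders.
import boto3

# MediaConvert requires the account-specific endpoint, discovered first.
endpoints = boto3.client("mediaconvert").describe_endpoints()
mc = boto3.client("mediaconvert", endpoint_url=endpoints["Endpoints"][0]["Url"])

mc.create_job(
    Role="arn:aws:iam::123456789012:role/MediaConvertRole",
    JobTemplate="vod-multi-resolution",  # defines the ABR renditions
    Settings={
        "Inputs": [{"FileInput": "s3://my-input-bucket/uploads/movie.mp4"}]
    },
)
```

A player with HLS support will then read the master playlist and either switch renditions automatically or expose them in a quality menu.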

Is it possible to push media to Azure from web browser?

1) I'm researching technology for a browser application that streams video. It should capture video from the webcam and push it to a service where it is stored and can be watched later. One possible option is Azure Media Services, but after a quick look at the documentation it seems it is not possible to use a plain modern browser without plugins. Am I correct? If not, can you please share links to GitHub projects or some example code to look at?
2) Another possible option is Amazon Kinesis Video Streams (it looks like the best solution I have come up with so far), but maybe you can recommend some other cloud services?
Thanks!
Currently the short answer is no.
WebRTC is the right solution for broadcasting from a browser. It is the only live-streaming protocol that is "somewhat" widely supported in modern browsers such as the latest Chrome.
AMS does not yet support receiving WebRTC. We only support RTMP and Smooth ingest (chunked MP4) right now.
As far as I'm aware, Kinesis also expects you to send chunked MKV (like chunked MP4, but a less popular container format), which would need a browser plugin or JavaScript library to support. I don't see any producer library from them in JavaScript.
WebRTC is your answer - but to catch it in the cloud, you may need to look at other solutions that run in an Azure Container. There are a number of third-party WebRTC solutions out there.

IBM Watson Speech to Text and webm

Currently the IBM Watson Speech to Text service supports only the compressed "ogg" format. However, the new standard for the WebRTC platform is "webm". As a result, we have to either use Firefox or send huge uncompressed "wav" files to Bluemix from the client browser. Is it possible to add support for "webm"?
The service added support for webm on April 10th, 2017. See the release notes. Additionally, here is a list of the audio formats supported by the service.
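As an illustration, a minimal sketch of sending browser-recorded webm audio to the service with the current ibm-watson Python SDK (the API key and service URL are placeholders):

```python
# Sketch: send browser-recorded webm audio to Watson Speech to Text using the
# ibm-watson Python SDK. The API key and service URL are placeholders.
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

stt = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_API_KEY"))
stt.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")

with open("recording.webm", "rb") as audio:
    result = stt.recognize(audio=audio, content_type="audio/webm").get_result()

# Print the best transcript for each recognized segment.
for chunk in result["results"]:
    print(chunk["alternatives"][0]["transcript"])
```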

Encoding audio (mp3, mp4, m4a, ogg) files for smooth streaming with Windows Azure Media Services

I want to encode audio files (mp3, mp4, m4a, ogg) for streaming and play the encoded files smoothly using an HTML5 player.
What I am doing now: I upload a file and encode it on Windows Azure Media Services using the "AAC Good Quality Audio" preset. It encodes the file into the .mp4 format, and I then create a SAS locator to play the file. This works well, but the problem is that the user can also download the file, which I don't want to allow.
If I create an OnDemandOrigin locator for the same encoded asset, it gives me a 404 error, which means we cannot play it.
Below are the steps I have used to upload the file to Azure Media Services:
Created an empty asset.
Uploaded the file into the asset.
Created a new job with a task to encode the audio file.
I have successfully encoded the file, and the origin URL is generated, but when I browse to the file I get a 404 error.
My queries:
Is the "AAC Good Quality Audio" preset right for my task?
How can I prevent the user from downloading the file if I use a SAS locator?
Is it possible to play the encoded file using an origin locator?
Can I encode audio files for smooth streaming? If so, which player should I use to play the encoded file on all browsers, iOS devices, and Android devices?
If you want further details, please feel free to ask.
Awaiting your response.
Thanks
If your users are able to listen to the audio you're publishing, they will also be able to download the file. You cannot prevent this; at best, you can make it difficult, but not impossible. More to the point, Media Services in its current incarnation has no way for you to do authorization of any kind, so the only tool you've got is time-bombed SAS locators.
The typical solution for this problem is to use DRM. Media Services supports PlayReady encryption, but you need to either have a PlayReady server or purchase it as a service (there is currently a service in the Azure Marketplace that provides PlayReady for a monthly price).
See the following article on how to protect assets with Microsoft PlayReady technology.
Origin locators are something you would use to publish a Smooth Streaming or HLS asset. They are not useful for regular media files, since an origin locator is internally roughly equivalent to an IIS Media Services endpoint. Regular media files can just as well be hosted in Blob Storage and referred to via a SAS locator (a sketch of a time-limited SAS URL follows).
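As an illustration of such a time-bombed SAS URL for a blob-hosted media file, here is a sketch using the current azure-storage-blob Python package (the original question used the older .NET-era SDK; the account, container, and blob names are placeholders):

```python
# Sketch: a time-limited ("time-bombed") SAS URL for a media file hosted in
# Blob Storage, using the azure-storage-blob package. Account, container,
# and blob names are placeholders.
from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account = "mystorageaccount"
container = "media"
blob = "encoded/audio.mp4"

token = generate_blob_sas(
    account_name=account,
    container_name=container,
    blob_name=blob,
    account_key="YOUR_ACCOUNT_KEY",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),  # link expires in an hour
)
url = f"https://{account}.blob.core.windows.net/{container}/{blob}?{token}"
print(url)
```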
There is currently no single format that will play across all devices and operating systems. You can get Smooth Streaming to work on most Windows and Mac computers (possibly Linux, too), either with Silverlight or with the Smooth Streaming plugin for the Flash-based OSMF. For iOS devices you will need to encode to HLS and use the HTML5 video tag. Microsoft Media Platform will support MPEG-DASH, a recently ratified ISO/IEC standard for dynamic adaptive streaming over HTTP. More details on how to use the DASH preview feature can be found here.
If you want Smooth Streaming for audio only, it looks like you will have to create a video asset with an empty video stream, although there is a UserVoice request to add support for audio-only in the future.