Azure Media Services - azure-media-services

Requirement: upload video(s) via the Azure portal and generate a streaming URL after the video is encoded.
The code was written in Java following this page on the Microsoft site - https://learn.microsoft.com/en-us/azure/media-services/media-services-java-how-to-use
but I get an empty list when I try to retrieve the list of AssetFileInfo:
ListResult<AssetFileInfo> assetFiles = mediaService.list(AssetFile.list(asset.getAssetFilesLink()));
Because of this, AssetFileInfo streamingAssetFile remains null and I can't generate the streaming URL; this line throws a NullPointerException:
return originLocator.getPath() + streamingAssetFile.getName() + "/manifest";
Please assist. I am getting the error below:
java.lang.NullPointerException
at com.zensar.azure.storage.blob.migration.MediaServices.getStreamingOriginLocator(MediaServices.java:212)

I just uploaded an mp4 file using the Azure portal, then encoded the file and tried to generate the streaming URL as described in the article (https://learn.microsoft.com/en-us/azure/media-services/media-services-java-how-to-use).
The only change I made was to comment out the following line:
// AssetInfo uploadAsset = uploadFileAndCreateAsset("BigBuckBunny.mp4");
and used this instead:
ListResult<AssetInfo> outputAssets = mediaService.list(Asset.list());
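For reference, the article's getStreamingOriginLocator walks the asset's files looking for the .ism manifest. Below is a minimal, defensive sketch of that step for the v2 Java SDK; encodedAsset and originLocator are assumed to come from the earlier encoding and locator-creation steps, and the .ism check plus the null guard are my additions so the code fails with a clear message instead of a NullPointerException when the file list comes back empty (for example when the asset being listed is the source upload rather than the encoder's output):

// Sketch only - assumes MediaContract mediaService, AssetInfo encodedAsset and
// LocatorInfo originLocator from the earlier steps in the article.
ListResult<AssetFileInfo> assetFiles =
        mediaService.list(AssetFile.list(encodedAsset.getAssetFilesLink()));

AssetFileInfo streamingAssetFile = null;
for (AssetFileInfo file : assetFiles) {
    if (file.getName().toLowerCase().endsWith(".ism")) {
        streamingAssetFile = file; // the streaming manifest used to build the URL
        break;
    }
}

if (streamingAssetFile == null) {
    throw new IllegalStateException("No .ism manifest found in asset " + encodedAsset.getName()
            + " - check that the encoder's output asset (not the source upload) is being listed");
}

return originLocator.getPath() + streamingAssetFile.getName() + "/manifest";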

Related

Attempting a Google Drive partial Download (Flutter) throws a header error

Here's my issue:
I am building a small Flutter application around audio files stored on Google Drive.
I am using the Drive API to make my requests, with these scopes in my Google Sign-In:
GoogleSignIn _googleSignIn = GoogleSignIn(
  scopes: [
    'email',
    'https://www.googleapis.com/auth/userinfo.profile',
    'https://www.googleapis.com/auth/contacts.readonly',
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/docs',
    'https://www.googleapis.com/auth/drive.appdata',
  ],
);
I have an auth element and handle signing in and out. Up to that point, no issues.
I can also list my files with an implementation that looks like this:
var api = widget.api.getAPI();
var files = await api.files.list($fields: '*');
This works perfectly, and so does:
var api = widget.api.getAPI();
var files = await api.files.get("myFileId"); // does return a File instance
But since I'd like to retrieve some of the metadata embedded in my audio files, and since the Drive API doesn't natively extract audio metadata and expose it as Drive metadata, I thought I'd pull it out myself with a partial download of the file.
Here's the catch: I can't get the partial download to work.
Based on the docs, I thought the implementation would look something like this:
import 'package:googleapis/drive/v3.dart' as ga;
(...)
try {
  var partiallyDownloadedFile = await api.files.get(
      "myFileIdHere",
      downloadOptions: ga.PartialDownloadOptions(ga.ByteRange(0, 10))); // should return a ga.Media instance
  print("partial download succeeded");
  print(partiallyDownloadedFile);
  // (...do stuff...)
  return;
} catch (err) {
  print('Error occurred: ');
  print(err);
  return;
}
But this always throws this error:
ApiRequestError(message: Attempting partial download but got invalid
'Content-Range' header (was: null, expected: bytes 0-10/).)
I tried it on WAV files and also on MP4 files. The error is always the same, which leads me to believe my implementation is somehow wrong, but I'm not sure how to fix it. Is my request missing the header? Is the response not including it?
While the error is explicit, it doesn't help me troubleshoot the issue at all. I can't find any documentation on how to perform a partial media request, and I haven't found any example projects to compare against.
PartialDownloadOptions does not have much documentation.
I could hand-craft a partial request against the download links (which is how I play the audio to begin with), but the Drive API supposedly supports this. Could anyone familiar with Flutter/the Google APIs help me correct my implementation?
EDIT: This was due to an error within the commons library of the Dart Google APIs, and was (at the very least superficially) fixed thanks to Kevmoo's efforts: https://github.com/google/googleapis.dart/issues/462
It was a Content-Range error caused by how browsers handle Access-Control-Expose-Headers, whereas iOS/Android-style requests typically expose every header.
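For anyone hand-rolling the range request instead, the partial download boils down to a GET with alt=media and a Range header. Here is a minimal sketch in Java (the file ID and OAuth token are placeholders), showing the Content-Range response header that the Dart client library was validating:

import java.net.HttpURLConnection;
import java.net.URL;

public class RangeProbe {
    public static void main(String[] args) throws Exception {
        // Placeholders - substitute a real file ID and a valid OAuth 2.0 access token.
        String fileId = "myFileIdHere";
        String accessToken = "ya29.placeholder-token";

        URL url = new URL("https://www.googleapis.com/drive/v3/files/" + fileId + "?alt=media");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Range", "bytes=0-10"); // first 11 bytes only

        // A successful partial download returns 206 Partial Content together with a
        // Content-Range header such as "bytes 0-10/4718592".
        System.out.println("HTTP " + conn.getResponseCode());
        System.out.println("Content-Range: " + conn.getHeaderField("Content-Range"));
    }
}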

Azure Media Services - Download Transient Error

I have a lot of audio files in my database whose URLs look like:
https://mystorage.blob.core.windows.net/mycontainer/uploaded%2F735fe9dc-e568-4920-a3ed-67230ce01991%2F5998d1f8-1795-4776-a19c-f1bc4a0d4786%2F2020-08-13T13%3A09%3A13.0996703Z?sv=2020-02-10&se=2022-01-05T16%3A58%3A50Z&sr=b&sp=r&sig=hQBPyOE92%2F67MqU%2Fe5V2NsqGzgPxogVeXQT%2BOlvbayw%3D
I am using these URLs as my JobInput and submitting an encoding job, because I want to migrate the audio distribution to a streaming approach.
However, every time I use this kind of URL, the job fails with DownloadTransientError and a message along the lines of "while trying to download the input files, the files were not accessible".
If I manually upload a file to blob storage with a simpler URL (https://mystorage.blob.core.windows.net/mycontainer/my-audio.wav) and use it as the JobInput, it works seamlessly. I suspect it has something to do with the special characters in the longer URL, but I am not sure. What could be the problem?
Here is the part of the code that submits the job:
var jobInput = new JobInputHttp(new[]
{
    audio.AudioUrl.ToString()
});

JobOutput[] jobOutput =
{
    new JobOutputAsset(outputAssetName),
};

var job = await client.Jobs.CreateAsync(
    resourceGroupName: _azureMediaServicesSettings.ResourceGroup,
    accountName: _azureMediaServicesSettings.AccountName,
    transformName: TransformName,
    jobName: jobName,
    new Job
    {
        Input = jobInput,
        Outputs = jobOutput
    });
You need to include the file name in the URL you're providing. I'll use your URL as an example, but unescape it so that it is clearer. The URL should be something like https://mystorage.blob.core.windows.net/mycontainer/uploaded/735fe9dc-e568-4920-a3ed-67230ce01991/5998d1f8-1795-4776-a19c-f1bc4a0d4786/2020-08-13T13:09:13.0996703Z/my-audio.wav?sv=2020-02-10&se=2022-01-05T16:58:50Z&sr=b&sp=r&sig=hQBPyOE92/67MqU/e5V2NsqGzgPxogVeXQT+Olvbayw=
Just include the actual blob name of the input video or audio file, with its file extension.
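In other words, the blob name (with its extension) has to sit between the container path and the SAS query string. A small, hypothetical helper, sketched here in Java (the method name and inputs are placeholders, not part of the Media Services SDK), that performs the splice before the URL is handed to the job input:

// Sketch only: insert the blob/file name just before the '?' of a SAS URL,
// e.g. ".../2020-08-13T13:09:13.0996703Z/my-audio.wav?sv=...".
static String withBlobName(String sasUrl, String blobName) {
    int queryStart = sasUrl.indexOf('?');
    String base = (queryStart >= 0) ? sasUrl.substring(0, queryStart) : sasUrl;
    String query = (queryStart >= 0) ? sasUrl.substring(queryStart) : "";
    return base + (base.endsWith("/") ? "" : "/") + blobName + query;
}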

Upload to Azure Blob using SAS and REST

I'm having trouble writing to an Azure Block Blob from C++ using a SAS (Shared Access Signature). I'm using the Blob REST API and Poco. The HTTP request returns error 404 (resource does not exist), but I can't figure out what I'm doing wrong.
I generate the SAS on the server in C# like this (seems to work fine):
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("my-blob");
container.CreateIfNotExists();
SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(40);
sasConstraints.Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.List;
string sasContainerToken = container.GetSharedAccessSignature(sasConstraints);
return Request.CreateResponse(HttpStatusCode.OK, container.Uri + sasContainerToken);
In the Azure portal I can indeed see the Blob container being created as expected. I receive this SAS in C++ using an HTTP request. What I get looks like this (some names and signature replaced for security reasons):
https://myname.blob.core.windows.net/my-blob?sv=2012-02-12&se=2016-06-07T11%3A13%3A19Z&sr=c&sp=wl&sig=%%%%%%%%%%%%%%%%%%%%%%%
Then I try to create the file using Poco and the Blob REST API. That looks like this:
std::string cloudUrl = sasURI + "&restype=container";
std::string fileName = "fname.ext";
Poco::URI* uri = new Poco::URI(cloudUrl.c_str());
std::string* path = new std::string(uri->getPathAndQuery());
Poco::Net::HTTPSClientSession* session = new Poco::Net::HTTPSClientSession(uri->getHost(), uri->getPort());
std::string method = Poco::Net::HTTPRequest::HTTP_PUT;
Poco::Net::HTTPRequest* request = new Poco::Net::HTTPRequest(method, *path, Poco::Net::HTTPMessage::HTTP_1_1);
request->add("x-ms-blob-content-disposition", "attachment; filename=\"" + fileName + "\"");
request->add("x-ms-blob-type", "BlockBlob");
request->add("x-ms-meta-m1", "v1");
request->add("x-ms-meta-m2", "v2");
Poco::Net::HTTPResponse* httpResponse = new Poco::Net::HTTPResponse();
int fileContent = 42;
request->setContentLength(sizeof(int));
request->setKeepAlive(true);
std::ostream& outputStream = session->sendRequest(*request);
outputStream << fileContent;
std::istream &is = session->receiveResponse(*httpResponse);
Poco::Net::HTTPResponse::HTTPStatus status = httpResponse->getStatus();
std::ostringstream outString;
Poco::StreamCopier::copyStream(is, outString);
if (status != Poco::Net::HTTPResponse::HTTP_OK)
{
    Logger::log("Connection failed\nstatus:", status, "\nreason:", httpResponse->getReason(), "\nreasonForStatus:", httpResponse->getReasonForStatus(status), "\nresponseContent:", outString.str());
}
I've looked up how the REST API works, and found that when using a SAS I don't need to do the regular authentication.
What am I doing wrong here? Why am I getting error 404?
I believe most of your code is correct; all you need to do is insert the file name in your SAS URL.
Now that I have seen this question more carefully, this is what is happening:
You're creating a SAS on a blob container (my-blob) and using this SAS to upload a file (let's call it fname.ext). However, you're not including the file name in the SAS URL, so Azure Storage assumes you're trying to upload a blob called my-blob into the $root container. On the service side, when Azure Blob Service validates the SAS, it validates it against the $root container; because you created the SAS for the my-blob container and the service is using the $root container, the SAS does not match, and that's why you're getting the 403 error.
What you need to do is insert the file name into your SAS URL. Your SAS URL (or request URL) would then be something like this (notice that I added fname.ext):
https://myname.blob.core.windows.net/my-blob/fname.ext?sv=2012-02-12&se=2016-06-07T11%3A13%3A19Z&sr=c&sp=wl&sig=%%%%%%%%%%%%%%%%%%%%%%%
Also, you don't need the following two lines of code:
request->add("x-ms-version", "2015-02-21");
request->add("x-ms-date", "2016-06-07");
These are not really needed when using a SAS.
I've finally figured out what was going wrong here. :)
There were two problems in the above code. The first is that the filename needed to be inserted into the URL, as Gaurav Mantri explained. This does the trick:
int indexOfQuestionMark = cloudUrl.find('?');
cloudUrl = cloudUrl.substr(0, indexOfQuestionMark) + "/" + fileName + cloudUrl.substr(indexOfQuestionMark);
The other problem is that I wasn't uploading enough bytes. sizeof(int) is 4 bytes, while pushing 42 into a stream turns it into the two characters "42", which is only 2 bytes, so the server keeps waiting for the remaining 2 bytes. That makes this the correct line in the example code above:
request->setContentLength(2);
Also, it works without these three lines so I suppose they're not needed:
request->add("x-ms-blob-content-disposition", "attachment; filename=\"" + fileName + "\"");
request->add("x-ms-meta-m1", "v1");
request->add("x-ms-meta-m2", "v2");
Similarly, adding "&restype=container" doesn't seem to be needed.
Finally, the SharedAccessBlobPermissions.List right isn't needed for writing, so it can be left out when generating the SAS on the server side.
One possible reason for your error could be the request date being too old. You're setting the request date as midnight UTC tonight, and Azure Storage allows only about 15 minutes of clock skew. A request date/time that is "too old" is one of the major causes of this 403 error (apart from an incorrect account key or an expired token in the case of a SAS).
This is how you're setting x-ms-date request header.
request->add("x-ms-date", "2016-06-07");
This header's value should be in the following format:
request->add("x-ms-date", "Sun, 11 Oct 2009 21:49:13 GMT");
Usually in the C# world, we would do DateTime.UtcNow.ToString("R") to get the date/time in the correct format.
Please change your code accordingly and see if that solves the problem.
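For completeness, the same RFC 1123 timestamp can be generated programmatically. A small sketch in Java (Poco offers something similar via Poco::DateTimeFormatter and DateTimeFormat::HTTP_FORMAT) showing the value Azure Storage expects in x-ms-date:

import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class MsDateHeader {
    public static void main(String[] args) {
        // RFC 1123 date in GMT, e.g. "Sun, 11 Oct 2009 21:49:13 GMT" - the format
        // Azure Storage expects for the x-ms-date request header.
        String xMsDate = DateTimeFormatter.RFC_1123_DATE_TIME
                .format(ZonedDateTime.now(ZoneOffset.UTC));
        System.out.println("x-ms-date: " + xMsDate);
    }
}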

Wowza secure Apple HTTP Live Streaming (AES-128 - external method). Player is not making the key request

I have been working with Wowza Streaming Server, and while trying to secure Apple HTTP Live Streaming using the AES-128 external method I am running into the problems below:
External AES-128 encryption is not working for .smil files located in a sub-folder of the application's source directory. I tried putting [my-stream].key in [install-dir]/keys and in [install-dir]/keys/[sub-folder-name], but both scenarios failed.
The playlist URL is: [wowza-server-ip]:[port]/[application-name]/[application-instance-name]/smil:[sub-folder]/demo.smil/playlist.m3u8
In the case of mp4s present in the application's source path, the player is not calling the key URL.
The sequence of calls made by the player is:
[wowza-server-ip]:[port]/crossdomain.xml
[wowza-server-ip]:[port]/[application-name]/[application-instance-name]/[stream-name]/playlist.m3u8
[wowza-server-ip]:[port]/[application-name]/[application-instance-name]/[stream-name]/chunklist_w[wowza-session-id].m3u8
[web-server-ip]:[port]/crossdomain.xml
After this, the player does not call the key request URI as it is supposed to. The calls go through correctly when I use the internal AES-128 method of encryption.
My chunklist_w[wowza-session-id].m3u8 is
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:12
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-KEY:METHOD=AES-128,URI="http://[web-server-ip]:[port]/SimpleWebServlet/key.jsp?wowzasessionid=[session-id]"
#EXTINF:9.52,
media_w[session-id]_0.ts
#EXTINF:10.4,
media_w[session-id]_1.ts
The [streamname].key file in the [install-dir]/keys folder is:
cupertinostreaming-aes128-key: DE51A7254739C0EDF1DCE13BBB308FF0
cupertinostreaming-aes128-url: http://[web-server-ip]:[port]/SimpleWebServlet/key.jsp
The JSP file that returns the key, key.jsp, is:
<%@ page import="java.util.*,java.io.*" %>
<%
boolean isValid = true;
if (!isValid)
{
    response.setStatus(403);
}
else
{
    response.setHeader("Content-Type", "binary/octet-stream");
    response.setHeader("Pragma", "no-cache");
    String keyStr = "DE51A7254739C0EDF1DCE13BBB308FF0";
    int len = keyStr.length() / 2;
    byte[] keyBuffer = new byte[len];
    for (int i = 0; i < len; i++)
        keyBuffer[i] = (byte) Integer.parseInt(keyStr.substring(i * 2, (i * 2) + 2), 16);
    OutputStream outs = response.getOutputStream();
    outs.write(keyBuffer);
    outs.flush();
}
%>
If anybody has encountered a similar problem or has successfully implemented Wowza's external AES-128 method, kindly shed some light on the issues mentioned above.
EDIT 1
Kindly ignore the 2nd point; after further analysis I found that there is an issue with JBoss delivering the key once it has delivered the crossdomain.xml to the player.
For reference, see: Can I call two crossdomain.xml from two different servers from my flash player?
EDIT 2
Apologies for the typo in my first point. It should be .smil rather than .mp4; I have corrected this in the first point.
I recently tried out HLS with AES-128 and it worked fine. My key file was in [wowzadir]/keys/mystream.key. It looks like it is your player that is not doing something right here. Which player are you using?
You can try using wget to download some chunks and inspect them with VLC, for example, to see whether the encryption was applied.
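To automate that check, here is a small sketch (the segment URL is a placeholder standing in for a real entry from the chunklist above) that downloads one .ts segment and inspects its first byte: a clear MPEG-TS segment starts with the 0x47 sync byte, so anything else strongly suggests the AES-128 encryption was actually applied:

import java.io.InputStream;
import java.net.URL;

public class ChunkCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder - substitute a real media_w[session-id]_0.ts URL from the chunklist.
        URL chunk = new URL("http://wowza-host:1935/myapp/mystream/media_w12345_0.ts");
        try (InputStream in = chunk.openStream()) {
            int first = in.read();
            // Unencrypted MPEG-TS segments repeat the 0x47 sync byte every 188 bytes;
            // an AES-128 encrypted segment will almost never start with it.
            System.out.println(first == 0x47
                    ? "Segment looks like clear MPEG-TS (encryption NOT applied)"
                    : String.format("First byte is 0x%02X - segment appears to be encrypted", first));
        }
    }
}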

Box API 2.0: Unable to Download

I'm testing out the new API, but having no luck downloading a test image file. The file exists, is accessible through the web UI, and is retrievable using the v1.0 API.
I'm able to access the metadata ("https://api.box.com/2.0/files/{fileid}") using both command-line curl and pycurl. However, calls to "https://api.box.com/2.0/files/{fileid}/data" bring back nothing. An earlier post (5/1) received the answer that the download feature had a bug and that "https://www.box.com" should be used as the base URL in the interim. That, however, just provokes a 404.
Please advise.
You should be able to download via http://api.box.com/2.0/files/<fileID>/content ... Looks like we have a bug somewhere in the backend. Hope to have it fixed soon.
Update 11/13/2012 -- This got fixed at least a month ago. I've just updated the URL above to our newer format.
For me it works with /content instead of /data. Python code below:
import requests

fileid = str(get_file_id(filenumber))
url = "https://api.box.com/2.0/files/" + fileid + "/content"
r = requests.get(url=url, headers=<HEADERS>, proxies=<PROXIES>)
infoprint("Downloading...")
filereceived = r.content
filename = uni_get_id(fileid, "name", "file")
f = open(filename, 'wb')  # binary mode, since r.content is bytes
infoprint("Writing...")
f.write(filereceived)
f.close()