Replace audio on video with Azure Media Encoder - azure-media-services

I'm using Azure Media Services to encode uploaded videos for streaming. I would like to replace the audio track with a different audio source. I'm already creating a custom preset configuration for the encoding, but I haven't found a way to replace or overlay different audio. Is this possible?

First of all, I presume you are attempting to use Media Encoder Standard, and not the deprecated Azure Media Encoder.
Is the replacement audio source in sync/aligned in time with respect to the video - same starting timestamp, same duration, etc.?
While the overlay feature will not work in this case (it would end up mixing the audio from the original with the replacement), you can attempt a simple workaround, shown below, which requires the content to be time-aligned.

The workaround below has three sections:
1. A preset that does video-only encoding of the video file
2. A preset that does audio-only encoding of the overlay/replacement file
3. A code block showing how to submit a Job with two Tasks that write to the same output Asset
Video Only Encoding Preset
Save the JSON below to a suitable file, say "C:\TEMP\VideoOnly.json". I'll use a single-bitrate setting as an example to keep the JSON brief.
{
    "Version": 1.0,
    "Codecs": [
        {
            "KeyFrameInterval": "00:00:02",
            "H264Layers": [
                {
                    "Profile": "Auto",
                    "Level": "auto",
                    "Bitrate": 2500,
                    "MaxBitrate": 2500,
                    "BufferWindow": "00:00:05",
                    "Width": 1280,
                    "Height": 720,
                    "BFrames": 3,
                    "Type": "H264Layer",
                    "FrameRate": "0/1"
                }
            ],
            "Type": "H264Video"
        }
    ],
    "Outputs": [
        {
            "FileName": "{Basename}_{Resolution}_{Bitrate}.mp4",
            "Format": {
                "Type": "MP4Format"
            }
        }
    ]
}
Audio Only Encoding Preset
Save the JSON below to a suitable file, say "C:\TEMP\AudioOnly.json".
{
    "Version": 1.0,
    "Codecs": [
        {
            "Profile": "AACLC",
            "Channels": 2,
            "SamplingRate": 48000,
            "Bitrate": 128,
            "Type": "AACAudio"
        }
    ],
    "Outputs": [
        {
            "FileName": "{Basename}_AAC_{AudioBitrate}.mp4",
            "Format": {
                "Type": "MP4Format"
            }
        }
    ]
}
Encoding
The code below assumes that you have the video file uploaded as an Asset myVideoAsset and the audio file uploaded as an Asset myAudioAsset.
string videoConfig = File.ReadAllText(@"C:\TEMP\VideoOnly.json");
string audioConfig = File.ReadAllText(@"C:\TEMP\AudioOnly.json");

// Prepare a Job with two Tasks that write to the same output Asset
IJob job = _context.Jobs.Create("Encoding " + myVideoAsset.Name + " and " + myAudioAsset.Name);
IMediaProcessor mediaProcessor = GetLatestMediaProcessorByName("Media Encoder Standard");

ITask videoTask = job.Tasks.AddNew("Video Task", mediaProcessor, videoConfig, TaskOptions.DoNotCancelOnJobFailure | TaskOptions.DoNotDeleteOutputAssetOnFailure);
videoTask.InputAssets.Add(myVideoAsset);
IAsset outputAsset = videoTask.OutputAssets.AddNew(myVideoAsset.Name + " plus " + myAudioAsset.Name + " - Encoded", options: AssetCreationOptions.None, formatOption: AssetFormatOption.None);

ITask audioTask = job.Tasks.AddNew("Audio Task", mediaProcessor, audioConfig, TaskOptions.DoNotCancelOnJobFailure | TaskOptions.DoNotDeleteOutputAssetOnFailure);
audioTask.InputAssets.Add(myAudioAsset);
audioTask.OutputAssets.Add(outputAsset); // Note the re-use of outputAsset here

Console.WriteLine("Submitting transcoding job...");
job.Submit();
// Wait for job to succeed, etc.
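The "wait" step can be a simple blocking wait. A minimal sketch, assuming the same classic Media Services .NET SDK used above (add using directives for System.Threading and System.Threading.Tasks):
// Block until the Job reaches a terminal state (Finished, Error, or Canceled)
Task progressTask = job.GetExecutionProgressTask(CancellationToken.None);
progressTask.Wait();
if (job.State == JobState.Error)
{
    Console.WriteLine("Encoding job failed.");
}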


VSCode extension for quick semantic info

I am used to pointing the mouse and getting information about certain references in Visual Studio Code.
Here is one example: using JavaScript, I point the mouse at a function reference and get information about the function signature.
I would like to have something similar for other file types.
For example, take the following snippet in a less popular language:
module top #(
    parameter NB=4
);
    logic [NB /*I would like info here */ -1:0] b;
endmodule
How can I write an extension so that, when I point the mouse at the parameter, it shows me the declaration in a box, preferably with the same syntax highlighting as in the editor?
There is now a pull request adding a sample to vscode-extension-samples.
Basically you have to write something like this:
import * as vscode from 'vscode';

class SimpleHoverProvider implements vscode.HoverProvider {
    public provideHover(
        document: vscode.TextDocument,
        position: vscode.Position,
        token: vscode.CancellationToken
    ): vscode.Hover | null {
        return new vscode.Hover(`${position.line}: ${position.character}`);
        // return null; if there is no information to show
    }
}

export function activate(context: vscode.ExtensionContext) {
    // Use the console to output diagnostic information (console.log) and errors (console.error)
    // This line of code will only be executed once when your extension is activated
    console.log('Congratulations, hover-provider-sample extension is active!');

    const hoverProvider = new SimpleHoverProvider();
    vscode.languages.registerHoverProvider('text', hoverProvider);
}
And define the languages and activation events in package.json:
{
    "activationEvents": [
        "onLanguage:text",
        "onLanguage:report"
    ],
    "contributes": {
        "languages": [
            {
                "id": "text",
                "extensions": [
                    ".txt"
                ]
            },
            {
                "id": "report",
                "extensions": [
                    ".rpt"
                ]
            }
        ]
    }
}
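To get closer to what the question asks for (the declaration shown with syntax highlighting), the hover body can be a MarkdownString code block instead of plain text. Below is a minimal sketch under stated assumptions: the 'systemverilog' language id and the naive regex lookup are illustrative placeholders, not a real parser.
import * as vscode from 'vscode';

class ParamHoverProvider implements vscode.HoverProvider {
    public provideHover(
        document: vscode.TextDocument,
        position: vscode.Position,
        token: vscode.CancellationToken
    ): vscode.Hover | null {
        // Identify the word under the cursor, e.g. "NB"
        const range = document.getWordRangeAtPosition(position);
        if (!range) {
            return null;
        }
        const word = document.getText(range);
        // Naive lookup: find a "parameter <word> ..." declaration in this file.
        // A real extension would use a language server or a proper parser.
        const match = document.getText().match(new RegExp('parameter\\s+' + word + '[^,)\\n]*'));
        if (!match) {
            return null;
        }
        // appendCodeblock renders the snippet with the given language's highlighting
        const md = new vscode.MarkdownString();
        md.appendCodeblock(match[0], 'systemverilog');
        return new vscode.Hover(md);
    }
}
Register it with vscode.languages.registerHoverProvider exactly as in the sample above.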

How to download a file that was uploaded locally

I was going through the UploadCollection samples: Sample from Docs.
I tried uploading a PDF file, which uploaded successfully and is shown in the list.
On selection, Download becomes enabled, and the code for download is:
onDownloadItem: function () {
    var oUploadCollection = this.byId("UploadCollection");
    var aSelectedItems = oUploadCollection.getSelectedItems();
    if (aSelectedItems.length) {
        for (var i = 0; i < aSelectedItems.length; i++) {
            oUploadCollection.downloadItem(aSelectedItems[i], true);
        }
    } else {
        MessageToast.show("Select an item to download");
    }
},
It doesn't download anything, but when I tried downloading other, pre-existing files, they downloaded successfully.
Why don't the locally uploaded files download?
Do the uploaded files need any extra attributes? The existing ones have a few, such as:
"documentId" : "64469d2f-b3c4-a517-20d6-f91ebf85b9da",
"fileName" : "Screenshot.jpg",
"mimeType" : "image/jpg",
"thumbnailUrl" : "",
"url" : "test-resources/sap/m/demokit/sample/UploadCollection/LinkedDocuments/Screenshot.jpg",.....
But when a new file is uploaded, these are all empty in the code:
"documentId": jQuery.now().toString(), // generate Id,
"fileName": sUploadedFile,
"mimeType": "",
"thumbnailUrl": "",
"url": "".......
I am clueless about how to download files uploaded locally; any guiding links are much appreciated. TIA.
One more question: how are uploaded files generally saved to a DB?
I have gone through the docs but couldn't find a solution; I read about and found a similar way to download, but no luck:
sap.ui.core.util.File.save();
When the link is pressed, get the file content as base64 with a FileReader:
var file = event.getSource().getParent().getFileObject();
var reader = new FileReader();
// call for file content
reader.readAsDataURL(file);
reader.onload = function (e) {
    var base64 = e.target.result;
};
Then, inside the onload handler (where base64 is in scope), create a Blob from the base64 data URL and open it in an iframe:
// Decode the data URL into bytes and wrap them in a Blob
var byteString = atob(base64.split(",")[1]);
var bytes = Uint8Array.from(byteString, function (c) { return c.charCodeAt(0); });
var blob = new Blob([bytes], { type: file.type });

var blobUrl = URL.createObjectURL(blob);
var myWindow = window.open("");
myWindow.document.write("<iframe width='100%' height='100%' src='" + blobUrl + "'></iframe>");
myWindow.document.close();
But if you don't want to just open it in a new window, you can download it directly as soon as you have the file content; see the sketch below.
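A minimal sketch of that direct download, assuming base64 holds the data URL produced by the FileReader above and file is the File object from the same handler (a temporary anchor element with the download attribute triggers the browser's save dialog):
// Trigger a download of the data URL via a temporary anchor element
var link = document.createElement("a");
link.href = base64; // the data URL from reader.onload
link.download = file.name; // suggested file name for the save dialog
document.body.appendChild(link);
link.click();
document.body.removeChild(link);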

Asking while using SSML in Dialogflow webhook

I am trying to build an Actions on Google agent via Dialogflow and keep getting errors when trying to ask the user a question that includes SSML.
I have built the agent on Dialogflow, have the logic implemented in the fulfillment webhook (via the node module dialogflow-fulfillment), and have been able to test it successfully using the test console on the right side of Dialogflow.
I then hooked up the Dialogflow integration to Google Assistant.
I first tried unsuccessfully:
const client = new WebhookClient({ request: req, response: res });
let qToSnd = 'Hi <break time="500ms"/> Can I help you?';
let conv = client.conv();
conv.ask(qToSnd);
client.add(conv);
The above would work (no errors) but would result in the question being asked with the <break> tag spoken out loud.
I have also tried:
conv.ask(
    new Text({
        text: _stripTags(qToSnd),
        ssml: qToSnd
    })
);
However, when I test this using the Actions on Google simulator I get the error message:
[Agent] isn't responding right now. Try again soon.
Digging into the logs viewer shows the following error message:
MalformedResponse: ErrorId: ... Failed to parse Dialogflow response into AppResponse because of invalid platform response. : Could not find a RichResponse or SystemIntent in the platform response for agentId: ... and intentId: ...
My fulfillment API is returning:
{
    "payload": {
        "google": {
            "expectUserResponse": true,
            "richResponse": {
                "items": [
                    {
                        "text": "Hi - Can I help you?",
                        "ssml": "Hi <break time=\"500ms\"/> Can I help you?"
                    }
                ]
            }
        }
    }
}
I will appreciate any pointers in the right direction.
Looking at the JSON snippet for a simple response in the documentation, you should wrap your item in a simpleResponse element. Additionally, the keys you are using for the spoken and displayed responses are incorrect; they should be textToSpeech and displayText.
{
    "payload": {
        "google": {
            "expectUserResponse": true,
            "richResponse": {
                "items": [
                    {
                        "simpleResponse": {
                            "textToSpeech": "Howdy, this is GeekNum. I can tell you fun facts about almost any number, my favorite is 42. What number do you have in mind?",
                            "displayText": "Howdy! I can tell you fun facts about almost any number. What do you have in mind?"
                        }
                    }
                ]
            }
        }
    }
}
Inspired by @NickFelker's answer and after researching this topic some more, I was able to get the SSML working by making sure to add the <speak> tags. So this works:
const client = new WebhookClient({ request: req, response: res });
let qToSnd = 'Hi <break time="500ms"/> Can I help you?';
let conv = client.conv();
conv.ask('<speak>' + qToSnd + '</speak>');
client.add(conv);
The fulfillment API returns:
{
    "payload": {
        "google": {
            "expectUserResponse": true,
            "richResponse": {
                "items": [
                    {
                        "simpleResponse": {
                            "textToSpeech": "<speak>Hi <break time=\"500ms\"/> Can I help you</speak>"
                        }
                    }
                ]
            }
        }
    }
}
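For completeness, a hedged alternative sketch: with the actions-on-google v2 client library's SimpleResponse class you can supply both the SSML speech and the display text explicitly (these names come from that library, not from dialogflow-fulfillment):
// Assumes the actions-on-google v2 client library
const { SimpleResponse } = require('actions-on-google');

conv.ask(new SimpleResponse({
    speech: '<speak>Hi <break time="500ms"/> Can I help you?</speak>', // spoken, with SSML
    text: 'Hi - Can I help you?' // shown on screens
}));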

Corrupt file when using Azure Functions External File binding

I'm running a very simple ExternalFileTrigger scenario in Azure Functions where I copy a newly created image file from one OneDrive directory to another.
function.json
{
    "bindings": [
        {
            "type": "apiHubFileTrigger",
            "name": "input",
            "direction": "in",
            "path": "Bilder/Org/{name}",
            "connection": "onedrive_ONEDRIVE"
        },
        {
            "type": "apiHubFile",
            "name": "$return",
            "direction": "out",
            "path": "Bilder/Minimized/{name}",
            "connection": "onedrive_ONEDRIVE"
        }
    ],
    "disabled": false
}
run.csx
using System;

public static string Run(string input, string name, TraceWriter log)
{
    log.Info($"C# File trigger function processed: {name}");
    return input;
}
Everything seems to work well, BUT the new output image file is corrupt. The size is almost twice as big.
Looking at the encoding, the original file is in ANSI but the new file generated by Azure Functions is in UTF-8.
It works fine when I use a text file whose source encoding is UTF-8.
Is it possible to force the Azure ExternalFileTrigger binding to use ANSI? Or how else can I solve this?
UPDATE 2019: The external file binding appears to be deprecated as of the current version of Azure Functions.
If you want to copy the file as-is, or do more fine-grained binary operations on the file contents, I recommend using the Stream type instead of string for your input and output bindings:
public static async Task Run(Stream input, Stream output, string name, TraceWriter log)
{
    using (MemoryStream ms = new MemoryStream())
    {
        input.CopyTo(ms);
        var byteArray = ms.ToArray();
        await output.WriteAsync(byteArray, 0, byteArray.Length);
    }
    log.Info($"C# File trigger function processed: {name}");
}
Change the output binding in function.json:
"name": "output",
This function will do an exact binary copy of the file, without any text conversion.
You can see which other types you can use for bindings in the External File bindings documentation (see the "usage" sections).
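If you don't need the intermediate buffer, the same binary copy can be written more compactly with Stream.CopyToAsync; a minimal variant of the function above:
public static async Task Run(Stream input, Stream output, string name, TraceWriter log)
{
    // Copy the bytes straight through, with no text decoding or re-encoding
    await input.CopyToAsync(output);
    log.Info($"C# File trigger function processed: {name}");
}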

Ink Filepicker convert - The FPFile could not be converted with the requested parameters

I have an application set up in Filepicker. This application uploads directly to my S3 bucket. The initial pickAndStore() call works well, but the follow-up convert() call always fails with the 403 error "The FPFile could not be converted with the requested parameters". I have the following code:
try {
    filepicker.setKey(apiKey);
    filepicker.pickAndStore(
        {
            extensions: ['.jpg', '.jpeg', '.gif', '.png'],
            container: 'modal',
            services: ['COMPUTER', 'WEBCAM', 'PICASA', 'INSTAGRAM', 'FACEBOOK', 'DROPBOX'],
            policy: policy,
            signature: signature
        },
        {
            location: 'S3',
            multiple: false,
            path: path
        },
        function (InkBlobs) {
            filepicker.convert(
                InkBlobs[0],
                {
                    width: 150,
                    height: 150,
                    fit: 'max',
                    align: 'faces',
                    format: 'png',
                    policy: policy,
                    signature: signature
                },
                {
                    location: 'S3',
                    path: response.path + fileName + '.png'
                },
                function (InkBlob) {
                    console.log(InkBlob);
                },
                function (FPError) {
                    console.log(FPError);
                }
            );
        },
        function (InkBlobs) {
            console.log(JSON.stringify(InkBlobs));
        }
    );
} catch (e) {
    console.log(e.toString());
}
The error handler function is always called. The raw POST response is...
"Invalid response when trying to read from
http://res.cloudinary.com/filepicker-io/image/fetch/a_exif,c_limit,f_png,g_face,h_150,w_150/https://www.filepicker.io/api/file/"
...with the rest of my credentials appended. The debug handler returns the previously mentioned message with the moreInfo parameter pointing to a URL "https://developers.filepicker.io/answers/jsErrors/142" which has no content on it about the error.
I thought the problem might be that using S3 directly means the file is not present on the Filepicker system to convert. I tried using the standard pick() function without any S3 uploading and then converting the resulting InkBlob. That produced exactly the same error message.
Any help would be appreciated.
In this instance, the error is in the combination of align: 'faces' and fit: 'max'. When using faces, you can only set fit to 'crop'.
As written, the conversion above asks Filepicker to find the faces but fit the image to the max allowed size, which is not a supported combination.
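Under that assumption, a sketch of the corrected conversion options (only fit changes relative to the code in the question):
{
    width: 150,
    height: 150,
    fit: 'crop', // 'crop' is the only fit supported together with align: 'faces'
    align: 'faces',
    format: 'png',
    policy: policy,
    signature: signature
}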
Try removing the "path" option from the policy.
Specifying the path in the policy works well for pickAndStore(), but if you specify a path in your policy for convert, Filepicker will give you a 403 error referencing the conversion parameters. It seems the API can't tell whether it's the source or the destination path.