Bad input: the source video has an avg_frame_rate of NaN fps and r_frame_rate of 90000 fps in Azure Media Service - azure-media-services

I've uploaded a song (.mp4 file) to Media Services. The upload succeeds, but when I try to create an encoding job I get the error below. It happens for some files and not for others, and I'm unable to work out what the error means or how to resolve it.
Error Msg:
Encoding task
ErrorProcessingTask : An error has occurred. Stage: ApplyEncodeCommand. Code: System.IO.InvalidDataException.
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --->
System.IO.InvalidDataException: Bad input: the source video has an avg_frame_rate of NaN fps and r_frame_rate of 90000 fps.
Code (using the "H264 Multiple Bitrate 720p" preset):
public static IAsset CreateEncodingJob(IAsset asset, string preset, string fileName)
{
    IJob job = _context.Jobs.Create(preset + " encoding job");

    var mediaProcessors =
        _context.MediaProcessors.Where(p => p.Name.Contains("Media Encoder Standard")).ToList();
    var latestMediaProcessor =
        mediaProcessors.OrderBy(mp => new Version(mp.Version)).LastOrDefault();

    ITask task = job.Tasks.AddNew(preset + " encoding task",
        latestMediaProcessor,
        preset,
        Microsoft.WindowsAzure.MediaServices.Client.TaskOptions.ProtectedConfiguration);

    task.InputAssets.Add(asset);
    task.OutputAssets.AddNew(fileName + " " + preset, AssetCreationOptions.None);

    job.StateChanged += new EventHandler<JobStateChangedEventArgs>(StateChanged);
    job.Submit();
    LogJobDetails(job.Id);

    Task progressJobTask = job.GetExecutionProgressTask(CancellationToken.None);
    progressJobTask.Wait();

    if (job.State == JobState.Error)
    {
        throw new Exception("\nExiting method due to job error.");
    }

    return job.OutputMediaAssets[0];
}
Can anyone help me with this?

Found the solution; reposting the comment here:
Your encode tasks are failing because the nominal frame rate reported by the input video is either too high or too low. You will have to override the output frame rate setting in the encoding preset. Suppose you know that the input videos have been recorded at 30 frames/second, then:
Take the JSON for "H264 Multiple Bitrate 720p" from https://msdn.microsoft.com/en-us/library/azure/mt269953.aspx
Edit/replace each "FrameRate": "0/1" entry with "FrameRate": "30/1". Note that there are multiple entries to be replaced.
Save the resultant JSON
When submitting the encoding task in CreateEncodingJob, pass the entire JSON as the preset string instead of the preset name (for example, by using System.IO.File.ReadAllText("song.Json"))
Regards,
Dilip.

Related

Google Cloud Storage atomic creation of a Blob

I'm using the hadoop-connectors project for writing BLOBs to Google Cloud Storage.
I'd like to make sure that a BLOB with a specific target name, written in a concurrent context, is either written in full or does not appear at all if an exception occurs while writing.
In the code below, if an I/O exception occurs, the partially written BLOB will still appear on GCS because the stream is closed in the finally block:
val stream = fs.create(path, overwrite)
try {
  actions.map(_ + "\n").map(_.getBytes(UTF_8)).foreach(stream.write)
} finally {
  stream.close()
}
The other possibility would be to not close the stream and let it "leak" so that the BLOB does not get created. However this is not really a valid option.
val stream = fs.create(path, overwrite)
actions.map(_ + "\n").map(_.getBytes(UTF_8)).foreach(stream.write)
stream.close()
Can anybody share a recipe for writing a BLOB to GCS atomically, either with hadoop-connectors or with the Cloud Storage client?
I have used reflection within hadoop-connectors to retrieve an instance of com.google.api.services.storage.Storage from the GoogleHadoopFileSystem instance
GoogleCloudStorage googleCloudStorage = ghfs.getGcsFs().getGcs();
Field gcsField = googleCloudStorage.getClass().getDeclaredField("gcs");
gcsField.setAccessible(true);
Storage gcs = (Storage) gcsField.get(googleCloudStorage);
in order to be able to make a call based on an input stream corresponding to the data in memory.
private static StorageObject createBlob(URI blobPath, byte[] content, GoogleHadoopFileSystem ghfs, Storage gcs)
        throws IOException {
    CreateFileOptions createFileOptions = new CreateFileOptions(false);
    CreateObjectOptions createObjectOptions = objectOptionsFromFileOptions(createFileOptions);
    PathCodec pathCodec = ghfs.getGcsFs().getOptions().getPathCodec();
    StorageResourceId storageResourceId = pathCodec.validatePathAndGetId(blobPath, false);

    StorageObject object =
        new StorageObject()
            .setContentEncoding(createObjectOptions.getContentEncoding())
            .setMetadata(encodeMetadata(createObjectOptions.getMetadata()))
            .setName(storageResourceId.getObjectName());

    InputStream inputStream = new ByteArrayInputStream(content, 0, content.length);
    Storage.Objects.Insert insert = gcs.objects().insert(
        storageResourceId.getBucketName(),
        object,
        new InputStreamContent(createObjectOptions.getContentType(), inputStream));

    // The operation succeeds only if there are no live versions of the blob.
    insert.setIfGenerationMatch(0L);
    insert.getMediaHttpUploader().setDirectUploadEnabled(true);
    insert.setName(storageResourceId.getObjectName());
    return insert.execute();
}
/**
 * Helper for converting from a Map<String, byte[]> metadata map that may be in a
 * StorageObject into a Map<String, String> suitable for placement inside a
 * GoogleCloudStorageItemInfo.
 */
@VisibleForTesting
static Map<String, String> encodeMetadata(Map<String, byte[]> metadata) {
    return Maps.transformValues(metadata, QuickstartParallelApiWriteExample::encodeMetadataValues);
}

// A function to encode metadata map values
private static String encodeMetadataValues(byte[] bytes) {
    return bytes == null ? Data.NULL_STRING : BaseEncoding.base64().encode(bytes);
}
Note that in the example above, even if there are multiple callers trying to create a blob with the same name in parallel, ONE and only ONE will succeed in creating the blob. The other callers will receive 412 Precondition Failed.
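A losing caller can detect the precondition failure and treat it as "another writer already created the blob in full". This is just how I handle it around the createBlob helper above; the exception type comes from the google-api-client library:
// com.google.api.client.googleapis.json.GoogleJsonResponseException
try {
    createBlob(blobPath, content, ghfs, gcs);
} catch (GoogleJsonResponseException e) {
    if (e.getStatusCode() == 412) {
        // Precondition Failed: another caller won the race and the blob already exists, fully written.
    } else {
        throw e;
    }
}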
GCS objects (blobs) are immutable [1], which means they can be created, deleted or replaced, but not appended.
The Hadoop GCS connector provides the HCFS interface, which gives the illusion of appendable files. But under the hood it is just one blob creation; GCS doesn't know whether the content is complete from the application's perspective, just as you mentioned in the example. There is no way to cancel a file creation.
There are two options you can consider:
Create a temp blob/file, copy it to the final blob/file, then delete the temp blob/file; see [2]. Note that there is no atomic rename operation in GCS; rename is implemented as copy-then-delete.
If your data fits into memory, first read the stream and buffer the bytes in memory, then create the blob/file; see [3].
The GCS connector should also work with both options, but I think the GCS client library gives you more control; a rough sketch of both options follows.
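For illustration, here is an untested sketch of both options using the google-cloud-storage client library. The class and method names are my own, so treat it as a starting point rather than a drop-in solution:
import com.google.cloud.WriteChannel;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class AtomicBlobWriteSketch {

    private final Storage storage = StorageOptions.getDefaultInstance().getService();

    // Option 1: stream into a temporary object, then copy it to the final name and delete the temp.
    // The final name only ever appears once the copy has completed.
    public void createViaTempObject(String bucket, String finalName, Iterable<String> lines) throws IOException {
        String tempName = finalName + ".tmp-" + UUID.randomUUID();
        BlobInfo tempInfo = BlobInfo.newBuilder(BlobId.of(bucket, tempName)).build();
        try (WriteChannel writer = storage.writer(tempInfo)) {
            for (String line : lines) {
                writer.write(ByteBuffer.wrap((line + "\n").getBytes(StandardCharsets.UTF_8)));
            }
        }
        // An exception above leaves (at most) the temp object behind; the final name is never half-written.
        storage.copy(Storage.CopyRequest.of(tempInfo.getBlobId(), BlobId.of(bucket, finalName))).getResult();
        storage.delete(BlobId.of(bucket, tempName));
    }

    // Option 2: buffer the content in memory and create the final object in a single request.
    // doesNotExist() maps to ifGenerationMatch(0), so a concurrent duplicate fails with 412.
    public Blob createFromMemory(String bucket, String finalName, byte[] content) {
        BlobInfo info = BlobInfo.newBuilder(BlobId.of(bucket, finalName))
                .setContentType("application/octet-stream")
                .build();
        return storage.create(info, content, Storage.BlobTargetOption.doesNotExist());
    }
}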

Transferring large data (>100 MB) over Mirror in Unity

The string SerializedFileString contains a serialized file, potentially hundreds of MB in size. The server tries to copy it into the client's local string ClientSideSerializedFileString. That is too good to be true, which is why it throws the exception below. Is there a Mirror way to do this?
[TargetRpc]
private void TargetSendFile(NetworkConnection target, string SerializedFileString)
{
    if (!hasAuthority) { return; }
    ClientSideSerializedFileString = SerializedFileString;
}
ArgumentException The output byte buffer is too small to contain the encoded data, encoding 'Unicode (UTF-8)' fallback 'System.Text.EncoderExceptionFallback'.

DxlImporter inside a loop throws error "DXL importer operation failed"

I have a Java agent that loops through a view and gets the attachment from each document. The attachment is a .dxl file containing the document's XML data. I extract the file to a temp directory and try to import the extracted .dxl as soon as it has been extracted.
The problem is that it only imports the first document's attachment in the loop and then throws the following error in the Java debug console:
NotesException: DXL importer operation failed
at lotus.domino.local.DxlImporter.importDxl(Unknown Source)
at JavaAgent.NotesMain(Unknown Source)
at lotus.domino.AgentBase.runNotes(Unknown Source)
at lotus.domino.NotesThread.run(Unknown Source)
My Java agent code is:
public class JavaAgent extends AgentBase {
    static DxlImporter importer = null;

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            // (Your code goes here)
            // Get current database
            Database db = agentContext.getCurrentDatabase();
            View v = db.getView("DXLProcessing_mails");
            DocumentCollection dxl_tranfered_mail = v.getAllDocumentsByKey("dxl_tranfered_mail");
            Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
            while (dxlDoc != null) {
                RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
                Vector allObjects = rt.getEmbeddedObjects();
                System.out.println("File name is " + allObjects.get(0));
                EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
                if (eo.getFileSize() > 0) {
                    eo.extractFile(System.getProperty("java.io.tmpdir") + eo.getName());
                    System.out.println("Extracted File to " + System.getProperty("java.io.tmpdir") + eo.getName());
                    String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
                    Stream stream = session.createStream();
                    if (stream.open(filePath) & (stream.getBytes() > 0)) {
                        System.out.println("In If" + System.getProperty("java.io.tmpdir"));
                        importer = session.createDxlImporter();
                        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                        System.out.println("Break Point");
                        importer.importDxl(stream, db);
                        System.out.println("Imported Sucessfully");
                    } else {
                        System.out.println("In else" + stream.getBytes());
                    }
                }
                dxlDoc = dxl_tranfered_mail.getNextDocument();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code executes until it prints "Break Point" and then throws the error, but the attachment does get imported the first time.
On the other hand, if I hard-code filePath to a specific .dxl file on the file system, it imports the DXL as a document into the database with no errors.
I am wondering whether the issue is that the stream passed in is not finished with before the next loop iteration executes.
Any suggestions would be helpful.
I can't see any part where your while loop would move on from the first document.
Usually you would have something like:
Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
dxlDoc.recycle();
dxlDoc = nextDoc;
Near the end of the loop to advance it to the next document. As your code currently stands it looks like it would never advance, and always be on the first document.
If you do not know about the need to 'recycle' Domino objects, I suggest you search for blog posts or articles that explain why it is necessary.
It is a little complicated, but basically the Java objects are just a 'wrapper' for the objects in the C API.
Whenever you create a Domino object (such as a Document, View, DocumentCollection, etc.), a memory handle is allocated in the underlying C layer. This needs to be released (recycled); that happens eventually when the session is recycled, but when you are processing in a loop it is much more important to recycle explicitly, as you can easily exhaust the available memory handles and cause a crash.
Also, you may need to close (and recycle) each Stream after you have finished importing each file.
Lastly, double-check that the extracted file causing the exception is definitely a valid DXL file; it could simply be that some of the attachments are not valid DXL and will always throw an exception.
You could put a try/catch within the loop to handle that scenario (and report the problem files), which will allow the agent to continue without halting; see the sketch below.
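Putting those pieces together, the body of your loop might look something like this untested sketch (inside the existing try block of NotesMain, keeping your variable names; adjust as needed):
Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
while (dxlDoc != null) {
    Stream stream = null;
    DxlImporter dxlImporter = null;
    try {
        RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
        Vector allObjects = rt.getEmbeddedObjects();
        EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
        if (eo != null && eo.getFileSize() > 0) {
            String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
            eo.extractFile(filePath);
            stream = session.createStream();
            if (stream.open(filePath) && stream.getBytes() > 0) {
                dxlImporter = session.createDxlImporter();
                dxlImporter.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                dxlImporter.importDxl(stream, db);
            }
            eo.recycle();
        }
    } catch (NotesException e) {
        // Report the problem attachment and carry on instead of halting the whole agent.
        System.out.println("DXL import failed: " + e.text);
    } finally {
        // Close and recycle per-iteration objects so handles are not exhausted.
        try {
            if (stream != null) { stream.close(); stream.recycle(); }
            if (dxlImporter != null) dxlImporter.recycle();
        } catch (NotesException ignore) { }
    }
    Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
    dxlDoc.recycle();
    dxlDoc = nextDoc;
}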

Entity Framework Extended throws DynamicProxy exception

When trying to do bulk updates using EntityFramework.Extended I get one of two exceptions.
Looking at the example I tried:
context.ProcessJobs.Where(job => true).Update(job => new ProcessJob
{
    Status = ProcessJobStatus.Processing,
    StatusTime = DateTime.Now,
    LogString = "Processing"
});
I got the following exception:
'EntityFramework.Reflection.DynamicProxy' does not contain a definition for 'InternalQuery'
...
System.Core.dll!System.Dynamic.UpdateDelegates.UpdateAndExecute1(System.Runtime.CompilerServices.CallSite site, object arg0) + 0x153 bytes
EntityFramework.Extended.dll!EntityFramework.Extensions.ObjectQueryExtensions.ToObjectQuery(System.Linq.IQueryable query) + 0x2db bytes
EntityFramework.Extended.dll!EntityFramework.Extensions.BatchExtensions.Update(System.Linq.IQueryable source, System.Linq.Expressions.Expression> updateExpression) + 0xe9 bytes
EntityFramework.Extended.dll!EntityFramework.Extensions.BatchExtensions.Update(System.Linq.IQueryable source, System.Linq.Expressions.Expression> updateExpression) + 0xe9 bytes
Based on a GitHub issue, I tried:
var c = ((IObjectContextAdapter) context).ObjectContext.CreateObjectSet<ProcessJob>();
c.Update(job => new ProcessJob
{
    Status = ProcessJobStatus.Processing,
    StatusTime = DateTime.Now,
    LogString = "Processing"
});
which results in the following exception (probably the same error as reported here):
'EntityFramework.Reflection.DynamicProxy' does not contain a definition for 'EnsureMetadata'
...
EntityFramework.Extended.dll!EntityFramework.Mapping.ReflectionMappingProvider.FindMappingFragment(System.Collections.Generic.IEnumerable itemCollection, System.Data.Entity.Core.Metadata.Edm.EntitySet entitySet) + 0xc1e bytes
EntityFramework.Extended.dll!EntityFramework.Mapping.ReflectionMappingProvider.CreateEntityMap(System.Data.Entity.Core.Objects.ObjectQuery query) + 0x401 bytes
EntityFramework.Extended.dll!EntityFramework.Mapping.ReflectionMappingProvider.GetEntityMap(System.Data.Entity.Core.Objects.ObjectQuery query) + 0x58 bytes
EntityFramework.Extended.dll!EntityFramework.Mapping.MappingResolver.GetEntityMap(System.Data.Entity.Core.Objects.ObjectQuery query) + 0x9f bytes
EntityFramework.Extended.dll!EntityFramework.Extensions.BatchExtensions.Update(System.Linq.IQueryable source, System.Linq.Expressions.Expression> updateExpression) + 0x1c8 bytes
I tried the latest version for EF5, and I upgraded to EF6 to see if the latest version works, but I get the same problem. We use Code First.
I am not sure how to proceed. I've started trying to understand how the EntityFramework.Extensions code works, but I am wondering whether I will have to fall back to a stored procedure or raw SQL, neither of which is ideal for our setup.
Does anyone know what these problems are, or have any ideas about how to work out what is going on?
It turns out that you can ignore this error. I had the debugger option to break on thrown CLR exceptions turned on. I followed the flow through the source code, then downloaded it and started debugging.
It seems that the exception thrown initially is expected, and the library retries with some other options. Unfortunately I didn't have time to look into the exact problem because I ran into another one, but that's the subject of a different question.

"Forbidden" error when uploading file through Google Cloud Storage API

I am using the "google-api-services-storage-v1beta2-rev5-java-1.15.0-rc.zip" Google Cloud Storage library together with the "StorageSample.java" sample program from here
I have followed the sample program's setup instructions and have set up the "client_secrets.json" and "sample_settings.json" files. The sample program compiles OK but runs only partially OK.
I have modified the "uploadObject" method of the "StorageSample.java" program so that it uploads a test file created by me (rather than a randomly generated file). The program runs OK through the following methods:
tryCreateBucket();
getBucket();
listObjects();
getObjectMetadata();
However, when running the "uploadObject(true)" method, I get the following error:
================== Uploading object. ==================
Forbidden
My modified "uploadObject" method is listed below :
private static void uploadObject(boolean useCustomMetadata) throws IOException {
    View.header1("Uploading object.");

    File file = new File("My_test_upload_file.txt");
    if (!file.exists() || !file.isFile()) {
        System.out.println("File does not exist");
        System.exit(1);
    }
    InputStream inputStream = new FileInputStream(file);
    long byteCount = file.length();

    InputStreamContent mediaContent = new InputStreamContent("application/octet-stream", inputStream);
    mediaContent.setLength(byteCount);

    StorageObject objectMetadata = null;
    if (useCustomMetadata) {
        List<ObjectAccessControl> acl = Lists.newArrayList(); // empty acl (seems default acl).
        objectMetadata = new StorageObject()
            .setName("myobject")
            .setMetadata(ImmutableMap.of("key1", "value1", "key2", "value2"))
            .setAcl(acl)
            .setContentDisposition("attachment");
    }

    Storage.Objects.Insert insertObject = storage.objects().insert("mybucket", objectMetadata, mediaContent);
    if (!useCustomMetadata) {
        insertObject.setName("myobject");
    }
    if (mediaContent.getLength() > 0 && mediaContent.getLength() <= 2 * 1000 * 1000 /* 2MB */) {
        insertObject.getMediaHttpUploader().setDirectUploadEnabled(true);
    }
    insertObject.execute();
}
On the first run of the program, a bucket is created and I get the "Forbidden" error when uploading my test file. The "Forbidden" error persists in subsequent runs.
I think that since the bucket is created by the program, the program should have sufficient access rights to upload a file to that bucket.
Is there any setup or operation that I have missed? Thanks for any suggestions.
Oh, what a careless mistake. I had forgotten to change the "mybucket" placeholder to the name of the bucket I actually created.
The program now runs OK.
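For reference, the fix was simply to pass the real bucket name to the insert call; keeping it in a single constant avoids the mismatch. The constant name and value below are just placeholders:
// Placeholder: use the bucket name the sample actually created for your project.
private static final String BUCKET_NAME = "my-real-bucket-name";

Storage.Objects.Insert insertObject =
        storage.objects().insert(BUCKET_NAME, objectMetadata, mediaContent);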