I'm trying to create a game in Unity where each frame is rendered into a texture and then assembled into a video using FFmpeg. The output created by FFmpeg should eventually be sent over the network to a client UI. I'm struggling mainly with the part where a frame is captured and passed as a byte array to an unsafe method, where it should be processed further by FFmpeg. The wrapper I'm using is FFmpeg.AutoGen.
The render to texture method:
private IEnumerator CaptureFrame()
{
    yield return new WaitForEndOfFrame();
    RenderTexture.active = rt;
    frame.ReadPixels(rect, 0, 0);
    frame.Apply();
    bytes = frame.GetRawTextureData();
    EncodeAndWrite(bytes, bytes.Length);
}
The unsafe encoding method so far:
private unsafe void EncodeAndWrite(byte[] bytes, int size)
{
    GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    IntPtr address = pinned.AddrOfPinnedObject();
    sbyte** inData = (sbyte**)address;
    fixed (int* lineSize = new int[1])
    {
        lineSize[0] = 4 * textureWidth;
        // Convert RGBA to YUV420P
        ffmpeg.sws_scale(sws, inData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
    }
    inputFrame->pts = frameCounter++;
    if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
        throw new ApplicationException("Error sending a frame for encoding!");
    pkt = new AVPacket();
    fixed (AVPacket* packet = &pkt)
        ffmpeg.av_init_packet(packet);
    pkt.data = null;
    pkt.size = 0;
    pinned.Free();
    ...
}
sws_scale takes an sbyte** as its second parameter, so I'm trying to convert the input byte array to sbyte** by first pinning it with GCHandle and then doing an explicit type conversion. I don't know if that's the correct way, though.
Moreover, the condition if(ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0) always throws an ApplicationException, and I really don't know why this happens. codecContext and inputFrame are my AVCodecContext and AVFrame objects, respectively, and the fields are defined as follows:
codecContext
codecContext = ffmpeg.avcodec_alloc_context3(codec);
codecContext->bit_rate = 400000;
codecContext->width = textureWidth;
codecContext->height = textureHeight;
AVRational timeBase = new AVRational();
timeBase.num = 1;
timeBase.den = (int)fps;
codecContext->time_base = timeBase;
videoAVStream->time_base = timeBase;
AVRational frameRate = new AVRational();
frameRate.num = (int)fps;
frameRate.den = 1;
codecContext->framerate = frameRate;
codecContext->gop_size = 10;
codecContext->max_b_frames = 1;
codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;
inputFrame
inputFrame = ffmpeg.av_frame_alloc();
inputFrame->format = (int)codecContext->pix_fmt;
inputFrame->width = textureWidth;
inputFrame->height = textureHeight;
inputFrame->linesize[0] = inputFrame->width;
Any help in fixing the issue would be greatly appreciated :)
Check the examples here: https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples
Especially scaling_video.c. In FFmpeg, scaling and pixel format conversion are the same operation (keep the size parameters the same for a pure pixel format conversion).
These examples are very easy to follow. Give it a try.
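For example, a context for a pure RGBA to YUV420P conversion could look like this with FFmpeg.AutoGen (only a sketch; textureWidth, textureHeight and the pixel formats are taken from the question):
// same source and destination size => pixel format conversion only
SwsContext* sws = ffmpeg.sws_getContext(
    textureWidth, textureHeight, AVPixelFormat.AV_PIX_FMT_RGBA,    // source
    textureWidth, textureHeight, AVPixelFormat.AV_PIX_FMT_YUV420P, // destination
    ffmpeg.SWS_BILINEAR, null, null, null);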
I think your cast sbyte** inData = (sbyte**)address; is incorrect, because address is an IntPtr object, so the correct conversion should probably be
sbyte* pInData = (sbyte*)address.ToPointer(); sbyte** ppInData = &pInData;
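Put together, the pinning and the call could look like this (a sketch only, reusing the field names from the question; note that the fifth argument of sws_scale is the number of source rows, so it should be the texture height rather than codecContext->width):
GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
try
{
    // pin the managed array, then build a real sbyte** from a pointer local
    sbyte* pInData = (sbyte*)pinned.AddrOfPinnedObject().ToPointer();
    sbyte** ppInData = &pInData;
    int lineSizeValue = 4 * textureWidth; // RGBA = 4 bytes per pixel per row
    ffmpeg.sws_scale(sws, ppInData, &lineSizeValue, 0, codecContext->height,
        inputFrame->extended_data, inputFrame->linesize);
}
finally
{
    pinned.Free();
}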
I'm trying to clone a mesh using GraphicsBuffer because its read/write option is false, and I can't change that. This is my current code:
public static bool Run(Mesh sourceMesh, out Mesh generatedMesh)
{
    GraphicsBuffer sourceDataBuffer = sourceMesh.GetVertexBuffer(0);
    GraphicsBuffer sourceIndexBuffer = sourceMesh.GetIndexBuffer();
    var vertexCount = sourceDataBuffer.count;
    var indexCount = sourceIndexBuffer.count;
    byte[] sourceData = new byte[vertexCount * (sourceDataBuffer.stride / sizeof(byte))];
    byte[] sourceIndex = new byte[(int)(indexCount * ((float)sourceIndexBuffer.stride / sizeof(byte)))];
    sourceDataBuffer.GetData(sourceData);
    sourceIndexBuffer.GetData(sourceIndex);
    var attributes = sourceMesh.GetVertexAttributes();
    generatedMesh = new Mesh();
    generatedMesh.vertexBufferTarget |= GraphicsBuffer.Target.Raw;
    generatedMesh.SetVertexBufferParams(vertexCount, attributes);
    generatedMesh.SetVertexBufferData(sourceData, 0, 0, sourceData.Length);
    generatedMesh.SetIndexBufferParams(indexCount, sourceMesh.indexFormat);
    generatedMesh.SetIndexBufferData(sourceIndex, 0, 0, sourceIndex.Length);
    generatedMesh.subMeshCount = sourceMesh.subMeshCount;
    for (int i = 0; i < sourceMesh.subMeshCount; i++)
    {
        var subMeshDescriptor = sourceMesh.GetSubMesh(i);
        generatedMesh.SetSubMesh(i, subMeshDescriptor);
    }
    sourceDataBuffer.Release();
    sourceIndexBuffer.Release();
    generatedMesh.RecalculateBounds();
    return true; // No error
}
It works like a charm in my test project. But when I try to clone things in my main project, GetData(sourceData) and GetData(sourceIndex) both return arrays full of zeros. What could be causing that? Could it be because read/write is disabled?
Edit: The problem only happens in bundles with non-readable meshes. I thought GraphicsBuffer should work with them, but it doesn't.
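A quick way to check whether readability is the difference (a sketch reusing sourceMesh from the method above; Mesh.isReadable reflects the Read/Write Enabled import setting):
// Compare the CPU readability flag with what GetData actually returns
Debug.Log($"isReadable: {sourceMesh.isReadable}");
GraphicsBuffer vb = sourceMesh.GetVertexBuffer(0);
byte[] probe = new byte[vb.stride]; // just the first vertex
vb.GetData(probe, 0, 0, probe.Length);
Debug.Log($"first vertex bytes: {string.Join(",", probe)}");
vb.Release();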
My game lets the user modify the terrain at runtime, but now I need to save that terrain. I've tried to directly save the terrain's heightmap to a file, but writing takes almost two minutes for a 513x513 heightmap.
What would be a good way to approach this? Is there any way to optimize the writing speed, or am I approaching this the wrong way?
public static void Save(string pathraw, TerrainData terrain)
{
    //Get the full directory to save to
    System.IO.FileInfo path = new System.IO.FileInfo(Application.persistentDataPath + "/" + pathraw);
    path.Directory.Create();
    System.IO.File.Delete(path.FullName);
    Debug.Log(path);
    //Get the width and height of the heightmap, and the heights of the terrain
    int w = terrain.heightmapWidth;
    int h = terrain.heightmapHeight;
    float[,] tData = terrain.GetHeights(0, 0, w, h);
    //Write the heights of the terrain to a file
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            //Mathf.Round rounds the floats to two decimals to decrease file size, so something like 5.2362534 becomes 5.24
            System.IO.File.AppendAllText(path.FullName, (Mathf.Round(tData[x, y] * 100) / 100) + ";");
        }
    }
}
As a sidenote, the Mathf.Round doesn't seem to affect the saving time much, if at all.
You are making a lot of small individual file IO calls. File IO is always time consuming and expensive, as each call involves opening the file, writing to it, flushing and closing it again.
Instead I would rather generate the complete string first, using e.g. a StringBuilder, which is also more efficient than something like
var someString = "";
for (...)
{
    someString += "xyz";
}
because the latter allocates a new string on every concatenation.
Then use e.g. a FileStream and StreamWriter.WriteAsync(string) for writing asynchronously.
Also rather use Path.Combine instead of directly concatenating strings via /. Path.Combine automatically uses the correct separators for the OS it is used on.
And instead of FileInfo.Directory.Create rather use Directory.CreateDirectory, which doesn't throw an exception if the directory already exists.
Something like
using System.IO;
using System.Text;
...
public static void Save(string pathraw, TerrainData terrain)
{
    //Get the full directory to save to
    var filePath = Path.Combine(Application.persistentDataPath, pathraw);
    var path = new FileInfo(filePath);
    Directory.CreateDirectory(path.DirectoryName);
    // makes no sense to delete
    // ... rather simply overwrite the file if it exists
    //File.Delete(path.FullName);
    Debug.Log(path);
    //Get the width and height of the heightmap, and the heights of the terrain
    var w = terrain.heightmapWidth;
    var h = terrain.heightmapHeight;
    var tData = terrain.GetHeights(0, 0, w, h);
    // put the string together
    // StringBuilder is more efficient than using
    // someString += "xyz" because the latter always allocates a new string
    var stringBuilder = new StringBuilder();
    for (var y = 0; y < h; y++)
    {
        for (var x = 0; x < w; x++)
        {
            // also add the linebreak if needed
            stringBuilder.Append(Mathf.Round(tData[x, y] * 100) / 100).Append(';').Append('\n');
        }
    }
    // FileMode.Create truncates an existing file, i.e. it overwrites it
    using (var file = File.Open(filePath, FileMode.Create, FileAccess.Write))
    {
        using (var streamWriter = new StreamWriter(file, Encoding.UTF8))
        {
            // note: ideally await this (e.g. in an async method) so the writer
            // isn't disposed before the write has completed
            streamWriter.WriteAsync(stringBuilder.ToString());
        }
    }
}
You might want to specify exactly how the numbers should be written, with a fixed precision, e.g.
(Mathf.Round(tData[x, y] * 100) / 100).ToString("0.00000000");
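Note also that ToString uses the current culture by default, so on some systems the decimal separator will be a comma, which would break parsing the ';'-separated file later. If that matters, pass an invariant culture explicitly, e.g.
using System.Globalization;
...
stringBuilder.Append(tData[x, y].ToString("0.00", CultureInfo.InvariantCulture)).Append(';');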
As the picture shows, I need to get the byte arrays described by the ByteRange in order to do some verification; the ranges are 0 to 840 and 960 to 1200.
I found the similar question : In Itext 7, how to get the range stream to sign a pdf?
iText needs to do the same thing in its own verification code; it does so in its SignatureUtil class. Thus, one can simply borrow from that code, e.g. like this:
try (PdfReader pdfReader = new PdfReader(SOURCE_PDF);
     PdfDocument pdfDocument = new PdfDocument(pdfReader)) {
    SignatureUtil signatureUtil = new SignatureUtil(pdfDocument);
    for (String name : signatureUtil.getSignatureNames()) {
        PdfSignature signature = signatureUtil.getSignature(name);
        PdfArray b = signature.getByteRange();
        RandomAccessFileOrArray rf = pdfReader.getSafeFile();
        try (InputStream rg = new RASInputStream(new RandomAccessSourceFactory().createRanged(rf.createSourceView(), SignatureUtil.asLongArray(b)));
             OutputStream result = TARGET_STREAM_FOR_name_BYTES) {
            byte[] buf = new byte[8192];
            int rd;
            while ((rd = rg.read(buf, 0, buf.length)) > 0) {
                result.write(buf, 0, rd);
            }
        }
    }
}
(RetrieveSignedRanges test testExtractSignedBytes)
If you want the byte range as a byte[] in memory, you can use a ByteArrayOutputStream as TARGET_STREAM_FOR_name_BYTES and retrieve the resulting byte array from it.
I'm trying to use libavcodec to encode an FLV video.
The following code is a sample that generates an MPEG video, and it works well. But after replacing the codec ID with AV_CODEC_ID_FLV1, the generated video file cannot be played.
void simpleEncode(){
    AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->bit_rate = 400000;
    ctx->width = 352;
    ctx->height = 288;
    AVRational time_base = {1, 25};
    ctx->time_base = time_base;
    ctx->gop_size = 10;
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;
    avcodec_open2(ctx, codec, NULL);
    AVFrame *frame = av_frame_alloc();
    av_image_alloc(frame->data, frame->linesize, ctx->width, ctx->height, ctx->pix_fmt, 32);
    frame->format = ctx->pix_fmt;
    frame->height = ctx->height;
    frame->width = ctx->width;
    AVPacket pkt;
    int got_output;
    FILE *f = fopen("test.mpg", "wb");
    for(int i=0; i<25; i++){
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        for(int w=0; w<ctx->width; w++){
            for(int h=0; h<ctx->height; h++){
                frame->data[0][h*frame->linesize[0]+w] = i*10;
            }
        }
        for(int w=0; w<ctx->width/2; w++){
            for(int h=0; h<ctx->height/2; h++){
                frame->data[1][h*frame->linesize[1]+w] = i*10;
                frame->data[2][h*frame->linesize[2]+w] = i*10;
            }
        }
        frame->pts = i;
        avcodec_encode_video2(ctx, &pkt, frame, &got_output);
        fwrite(pkt.data, 1, pkt.size, f);
    }
}
AV_CODEC_ID_FLV1 is a poorly named macro. It refers to the Sorenson H.263 codec. It was at one time the default codec for the FLV container format, but no longer is; it was replaced by VP6 and now H.264. Except for this history it has no relation to the FLV container format.
An MPEG-1 stream is an odd thing in that its elementary stream format is also a valid container. This is not the case for H.263. You cannot simply write the packets to disk and play them back; you must encapsulate the elementary stream in a container. The easiest way to do that is to use libavformat.
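For illustration, the muxing itself is only a handful of libavformat calls. Here is a sketch written with the FFmpeg.AutoGen C# bindings (the underlying C functions have the same names and signatures); ctx and pkt are assumed to be the encoder context and packet from the question:
// Wrap the encoded packets in an FLV container instead of writing the raw stream
AVFormatContext* ofmt = null;
ffmpeg.avformat_alloc_output_context2(&ofmt, null, "flv", "test.flv");
AVStream* stream = ffmpeg.avformat_new_stream(ofmt, null);
ffmpeg.avcodec_parameters_from_context(stream->codecpar, ctx);
stream->time_base = ctx->time_base;
ffmpeg.avio_open(&ofmt->pb, "test.flv", ffmpeg.AVIO_FLAG_WRITE);
ffmpeg.avformat_write_header(ofmt, null);
// for each encoded packet, instead of fwrite:
ffmpeg.av_packet_rescale_ts(&pkt, ctx->time_base, stream->time_base);
pkt.stream_index = stream->index;
ffmpeg.av_interleaved_write_frame(ofmt, &pkt);
// after the last packet:
ffmpeg.av_write_trailer(ofmt);
ffmpeg.avio_closep(&ofmt->pb);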
I'm trying to get an audio queue working in an iPhone app, and whenever AudioQueueStart is called it returns the "fmt?" result code (kAudioFormatUnsupportedDataFormatError). In the code below I'm setting the format to kAudioFormatLinearPCM, which surely is supported. What am I doing wrong?
data.mDataFormat.mSampleRate = 44100;
data.mDataFormat.mFormatID = kAudioFormatLinearPCM;
data.mDataFormat.mFormatFlags = 0;
data.mDataFormat.mBytesPerPacket = 4;
data.mDataFormat.mFramesPerPacket = 1;
data.mDataFormat.mBytesPerFrame = 4;
data.mDataFormat.mChannelsPerFrame = 2;
data.mDataFormat.mBitsPerChannel = 16;
OSStatus status;
status = AudioQueueNewOutput(&data.mDataFormat, audioCallback, &data, CFRunLoopGetCurrent (), kCFRunLoopCommonModes, 0, &data.mQueue);
for (int i = 0; i < NUMBUFFERS; ++i)
{
    status = AudioQueueAllocateBuffer(data.mQueue, BUFFERSIZE, &data.mBuffers[i]);
    audioCallback(&data, data.mQueue, data.mBuffers[i]);
}
Float32 gain = 1.0;
status = AudioQueueSetParameter (data.mQueue, kAudioQueueParam_Volume, gain);
status = AudioQueueStart(data.mQueue, NULL);
data is of type audioData, which is defined like this:
typedef struct _audioData {
    AudioQueueRef mQueue;
    AudioQueueBufferRef mBuffers[NUMBUFFERS];
    AudioStreamBasicDescription mDataFormat;
} audioData;
thanks
The cause of your error is actually AudioQueueNewOutput rather than AudioQueueStart. See this related question: audio streaming services failing to recognize file type
It turns out I needed to set some flags. It works with
data.mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
(With mFormatFlags left at 0, the description never says the samples are signed, packed integers, so the queue apparently can't match it to a supported linear PCM layout.)
Edit: actually, don't use kLinearPCMFormatFlagIsBigEndian; it seems that with this format it should be little endian.