How to reset FFmpeg static global variables? - iPhone

I'm trying to create a movie from a set of PNG images using FFmpeg on the iPhone, and later merge that video with audio that is recorded separately. Those two steps make up my first pass. But when I start my second pass, FFmpeg crashes in the first phase. I know this is because the global variables set during the first pass are not reset for the second pass. Is there any way to reset FFmpeg's static global variables?
In my case I get an error like "frame size changed to 320x400, bgra", even though the input images are still PNGs when I start my second pass.

This issue is resolved now. After debugging the FFmpeg code, I found that the pixel format was not being reset and was retaining the previously set value. The fix is to reset frame_pix_fmt = PIX_FMT_NONE before you start the actual encoding; frame_pix_fmt is declared as a static global variable in ffmpeg.c.
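A minimal sketch of the idea (reset_ffmpeg_globals is a hypothetical helper of mine; which other statics need clearing depends on which parts of ffmpeg.c your passes touch):

/* In ffmpeg.c -- frame_pix_fmt is the static global mentioned above. */
static enum PixelFormat frame_pix_fmt = PIX_FMT_NONE;

/* Hypothetical helper: clears state left over from the previous pass. */
void reset_ffmpeg_globals(void)
{
    frame_pix_fmt = PIX_FMT_NONE;
    /* ...reset any other static globals your passes modify here... */
}

Call reset_ffmpeg_globals() before starting each pass, not just the second one, so every run begins from a clean slate.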

Related

Anylogic: Dynamically change source rate using variable/slider

I am trying to dynamically change the source Arrival rate using a variable "arrivalRate" linked to a slider (see image).
However, during the simulation the initial rate remains the same, even when I change the arrivalRate. I know that the arrivalRate variable is changing successfully (it is an int) - but this has no effect on the source rate during the simulation.
Anyone have an idea what the issue is - or how to fix it?
Whenever you see the = sign before a field, it means the field is not dynamic: it is evaluated only once, at the start of the model or when the element is created, and it will not change throughout the simulation run unless you force it. In other words, the variable arrivalRate is read only once, to assign the source's arrival rate, and that's it.
Now if you want to change it dynamically, in the slider's Action field, write the following:
source.set_rate( arrivalRate );

MIT-Scratch: Sequential cloning without delay

I am just starting to play with this as an educational tool for a youngster and am encountering strange behavior while attempting to clone sprites.
I set up global variables for position x,y in sprite_1 and clone a sprite_2 object. The clone immediately copies the global x,y to a local x,y and exits. Later, sprite_2 renders using the stored local x,y.
sprite_1:
sprite_2:
I expect the four sprites to clone diagonally up/right on the screen according to this small reproducible example. Instead I appear to get four sprite_2 objects all on top of each other:
If I add a delay of 1 second at the end of the clone(x,y) function, however, all is well:
As all four sprite_2 objects appear to be where the last clone was placed, I suspect that the clones are not created immediately, but rather as a batch all at once at some later time, and that they therefore all take the last coordinates from the globals _clone_enemy_x/y.
Is this the case? Is there a way to circumvent this behavior, or what is the solution?
I have two possible solutions to this problem:
Go to the "define clone()()" block, right-click it, open the advanced dropdown, and tick "run without screen refresh".
Get rid of the custom block altogether, and instead place the blocks it contains directly in the main code.
I hope this helps!

Openpanel and symbol communication not working

I am trying to make a patch that plays audio when a bang is pressed. I have put a symbol object in place so that I don't need to keep re-importing the file. However, it only works some of the time.
A warning in the Pd console reads: Start requested with no prior open
However, I have imported an audio file.
Is there something that I have done wrong?
One problem is that whenever you send a [1( to [readsf~], you must have sent an [open ...( message directly beforehand.
Even if you have just successfully opened a file but then stopped it (with [0() or played it through (so it has been closed automatically), you have to send the filename again.
The real problem is that your messages are out of order: you should never rely on fan-out (that is, connecting one message outlet to multiple inlets), as the order in which those connections fire is undefined.
Use [trigger] to get the order-of-execution correct.
(Mastering [trigger] is probably the single most important step in learning to program Pd)
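As a rough sketch of the idea (the file path is a placeholder for your own), the patch could look like this:

[bang(
 |
[t b b]
 |    \
 |     [open /path/to/sound.wav(
 |      |
[1(     |
 |      |
[readsf~]

Because [trigger] fires its outlets right-to-left, [readsf~] always receives the [open ...( message first and the [1( second, so playback starts from a freshly opened file every time.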

GPUImageMovieWriter frame presentationTime

I have a GPUImageColorDodgeBlend filter with two inputs connected:
A GPUImageVideoCamera which is getting frames from the iPhone video camera.
A GPUImageMovie which is an (MP4) video file that I want to have laid over the live camera feed.
The GPUImageColorDodgeBlend is then connected to two outputs:
A GPUImageView to provide a live preview of the blend in action.
A GPUImageMovieWriter to write the movie to storage once a record button is pressed.
Now, before the video starts recording, everything works OK 100% of the time. The GPUImageVideo is blended over the live camera video fine, and no issues or warnings are reported.
However, when the GPUImageMovieWriter starts recording, things start to go wrong randomly. About 80-90% of the time, the GPUImageMovieWriter works perfectly, there are no errors or warnings and the output video is written correctly.
However, about 10-20% of the time (and from what I can see, this is fairly random), things seem to go wrong during the recording process (although the on-screen preview continues to work fine).
Specifically, I start getting hundreds and hundreds of Problem appending pixel buffer at time: errors.
This error originates from the - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex method in GPUImageMovieWriter.
This issue is triggered by problems with the frameTime values that are reported to this method.
From what I can see, the problem is caused by the writer sometimes receiving frames numbered by the video camera (which tend to have extremely high time values like 64616612394291 with a timescale of 1000000000). But, then sometimes the writer gets frames numbered by the GPUImageMovie which are numbered much lower (like 200200 with a timescale of 30000).
It seems that GPUImageMovieWriter is happy as long as the frame times are increasing, but once a frame time decreases, it stops writing and just emits Problem appending pixel buffer at time: errors.
I seem to be doing something fairly common, and this hasn't been reported anywhere as a bug, so my questions are (answers to any or all of these are appreciated -- they don't all need to necessarily be answered sequentially as separate questions):
Where do the frameTime values come from -- why does it seem so arbitrary whether the frameTime is numbered according to the GPUImageVideoCamera source or the GPUImageMovie source? Why does it alternate between the two -- shouldn't the frame numbering scheme be uniform across all frames?
Am I correct in thinking that this issue is caused by non-increasing frameTimes?
...if so, why does GPUImageView accept and display the frameTimes just fine on the screen 100% of the time, yet GPUImageMovieWriter requires them to be ordered?
...and if so, how can I ensure that the frameTimes that come in are valid? I tried adding if (frameTime.value < previousFrameTime.value) return; to skip any lesser-numbered frames, which works -- most of the time (a sketch of this check follows this list). Unfortunately, when I set playsAtActualSpeed on the GPUImageMovie this tends to become far less effective, as all the frames end up getting skipped after a certain point.
...or perhaps this is a bug, in which case I'll need to report it on GitHub -- but I'd be interested to know if there's something I've overlooked here in how the frameTimes work.
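(For reference, here is roughly the skip-frames check mentioned above, rewritten with CMTimeCompare, which -- unlike comparing the raw value fields -- accounts for the two different timescales involved; previousFrameTime is an ivar I maintain myself, not something provided by GPUImage:)

// Inside newFrameReadyAtTime:atIndex: -- drop any frame that would move
// the writer's clock backwards. CMTimeCompare normalizes the timescales
// before comparing, so camera and movie timestamps compare safely.
if (CMTimeCompare(frameTime, previousFrameTime) < 0)
{
    return;
}
previousFrameTime = frameTime;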
I've found a potential solution to this issue, which I've implemented as a hack for now, but could conceivably be extended to a proper solution.
I've traced the source of the timing back to GPUImageTwoInputFilter which essentially multiplexes the two input sources into a single output of frames.
In the method - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex, the filter waits until it has collected a frame from the first source (textureIndex == 0) and from the second, and then forwards both frames on to its targets.
The problem (the way I see it) is that the method simply uses the frameTime of whichever frame comes in second (excluding the case of still images, for which CMTIME_IS_INDEFINITE(frameTime) == YES, which I'm not considering here because I don't work with still images), and that may not always be the same input's frame (for whatever reason).
The relevant code which checks for both frames and sends them on for processing is as follows:
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    [super newFrameReadyAtTime:frameTime atIndex:0]; // this line has the problem
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}
What I've done is adjust the above code to use [super newFrameReadyAtTime:firstFrameTime atIndex:0] so that it always uses the frameTime from the first input and totally ignores the frameTime from the second input. So far, it's all working fine like this. (I'd still be interested for someone to let me know why this is written this way, given that GPUImageMovieWriter seems to insist on increasing frameTimes, which the method as-is doesn't guarantee.)
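Concretely, the adjusted block looks like this (firstFrameTime being the timestamp recorded when the first input's frame arrived):

if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    // Always forward the first input's timestamp so downstream targets
    // (in particular GPUImageMovieWriter) see a single increasing clock.
    [super newFrameReadyAtTime:firstFrameTime atIndex:0];
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}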
Caveat: This will almost certainly break entirely if you work only with still images, in which case you will have CMTIME_IS_INDEFINITE(frameTime) == YES for your first input's frameTime.

Canon EDSDK sample code - help to understand save file to location

I am new to the EDSDK, but so far have been very happy with the results. I have my program working just fine saving to the camera; however, when I set saveTo to Host, I'm unclear on where it thinks it's supposed to save the file.
Everything appears to work: the callback function gets called and the progress bar animates, but I have no idea where it's pointing the file to.
The closest I get is finding where the #"download" command is issued; the argument to this call should be getting cast as an (EdsDirectoryItemRef).
This all seems to come from the EDSCALLBACK handleObjectEvent, but I can't figure out how it gets constructed.
Ideally I'd like to be able to specify where on disk I want the images to go. Can someone provide some aid?
[edit]
Okay, I see the images are going into the build directory, but perhaps someone could help me understand why -- or, even better, how to specify a path myself.
When you set saveTo to Host, the image is first stored in temporary memory on the camera. The camera then raises a DirItemRequestTransfer event, which invokes the callback function handleObjectEvent; a reference to the image in the camera's temporary memory is passed to that callback.
Within the handleObjectEvent callback you would typically create a file stream and use EdsDownload to transfer the file to a location on the PC (that location is specified by the file stream).
When you create a file stream you need to specify a file name (the first argument). This file name determines where the image will be stored. If you specify just a file name without a path, the image ends up in the build directory (the working directory of the running process). If you would like to save the file in a particular location, specify the file name along with its full path.
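A minimal sketch of such a callback, following the usual EDSDK sample pattern (the destination directory is a placeholder and error handling is trimmed):

#include <stdio.h>
#include "EDSDK.h"

EdsError EDSCALLBACK handleObjectEvent(EdsObjectEvent event,
                                       EdsBaseRef object,
                                       EdsVoid *context)
{
    EdsError err = EDS_ERR_OK;

    if (event == kEdsObjectEvent_DirItemRequestTransfer)
    {
        EdsDirectoryItemRef dirItem = (EdsDirectoryItemRef)object;
        EdsDirectoryItemInfo itemInfo;
        EdsStreamRef stream = NULL;

        err = EdsGetDirectoryItemInfo(dirItem, &itemInfo);
        if (err == EDS_ERR_OK)
        {
            /* The full path is what keeps the image out of the build
               directory; "/Users/me/Pictures" is a placeholder. */
            char path[1024];
            snprintf(path, sizeof(path), "/Users/me/Pictures/%s", itemInfo.szFileName);
            err = EdsCreateFileStream(path,
                                      kEdsFileCreateDisposition_CreateAlways,
                                      kEdsAccess_ReadWrite,
                                      &stream);
        }
        if (err == EDS_ERR_OK)
            err = EdsDownload(dirItem, itemInfo.size, stream);
        if (err == EDS_ERR_OK)
            err = EdsDownloadComplete(dirItem);
        if (stream != NULL)
            EdsRelease(stream);
    }
    if (object != NULL)
        EdsRelease(object);
    return err;
}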
Hope this helps.