Real-time video processing from camera - MATLAB

I have a problem with real-time video processing.
I want to access consecutive frames (previous, current & next).
...
trigger(RawVideo)
cur_frame=getdata(RawVideo,1,'uint8');
imshow(cur_frame);
...
With this code, though, I can't access the previous or next frame.
Please help me.

If you're processing in "real time" you won't have access to the next frame. As for the previous frame, you'll have to store it in a buffer so that you can reference it while processing the current frame. You could also keep two buffers (the last two frames) and relabel them: call the most recent arrival "next", the frame before it "current", and the one before that "previous" (see the sketch below). Maybe that helps?
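A minimal sketch of that two-buffer idea, assuming RawVideo is your videoinput object and processFrames is a placeholder for your own processing function:

prev_frame = [];
cur_frame  = [];
while true
    trigger(RawVideo);
    next_frame = getdata(RawVideo, 1, 'uint8');   % newest frame plays the role of "next"
    if ~isempty(prev_frame)
        % processFrames(prev_frame, cur_frame, next_frame);  % hypothetical
        imshow(cur_frame);                        % note: display lags the camera by one frame
    end
    prev_frame = cur_frame;                       % shift the window forward
    cur_frame  = next_frame;
end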

Related

MIT-Scratch : Sequential cloning without delay

I am just starting to play with this as an educational tool for a youngster, and I encountered strange behavior while attempting to clone sprites.
I set up global variables for position x,y in sprite_1 and clone a sprite_2 object. The clone immediately copies the global x,y to local x,y and exits. Later, sprite_2 renders using the stored local x,y.
sprite_1:
sprite_2:
Based on this small reproducible example, I expect the four sprites to clone diagonally up/right on the screen. Instead I appear to get four sprite_2 objects all on top of each other:
If, however, I add a delay of 1 second at the end of the clone(x,y) function, all is well:
As all four sprite_2 objects appear where the last clone was placed, I suspect that the clones are not created immediately but instead created as a batch all at once, at some later time, and are therefore all taking the last coordinates from the globals _clone_enemy_x/y.
Is this the case? Is there a way to circumvent this behavior, or what is the solution?
I have two possible solutions to this problem:
Go to the "define clone()()" block, right-click it, open the advanced dropdown, and tick "run without screen refresh".
Get rid of the custom block altogether, and instead put the blocks that made up the custom block directly into the actual code.
I hope this helps!

libspotify C sending zeros at the end of track

I'm using the libspotify SDK, the C library for Win32.
I think I have the right setup: every session callback is registered. I don't understand why I never receive the end_of_track callback, while music_delivery keeps being called with zero-padded frames, 22050 frames at a time.
To start playback, I first load the track with sp_session_load; as long as it returns SP_ERROR_IS_LOADING I post a message to my message queue (my synchronization method, using the Win32 PostMessage API) so that I call sp_session_load again. As soon as it returns SP_ERROR_OK I call sp_session_play, and music_delivery starts immediately with correct frames.
I don't know why, at the end of the track, the libspotify runtime then starts sending zero-padded frames instead of calling the end_of_track callback.
In other conditions it works perfectly: when I used an sp_track obtained from an album browse, the track was fully loaded by the time I loaded it into the current session for playing, and end_of_track was called correctly. In the failing case, I searched for the track by its Spotify URI and got the results; there the track metadata was not yet ready at the moment I attempted to play, so I used the "polling" on sp_session_load with PostMessage described above.
Can anybody help me?
I ran into the same problem, and I think the issue was that I was consuming the data too fast without giving other threads time to do any work, since I was spending all of my time in the music_delivery callback. I found that if I add some throttling and notify the main thread that it can wake up to do some processing, the extra zeros at the end of the track are reduced to one delivery of 22,050 frames (or 500 ms at 44.1 kHz).
Here is an example of what I added to my callback, heavily borrowed from the jukebox.c example provided with the SDK:
/* Buffer 1 second of data, then notify the main thread to do some processing.
   (g_throttle is assumed to be a frame counter incremented by num_frames
   earlier in the music_delivery callback.) */
if (g_throttle > format->sample_rate) {
    pthread_mutex_lock(&g_notify_mutex);
    g_notify_do = 1;
    pthread_cond_signal(&g_notify_cond);
    pthread_mutex_unlock(&g_notify_mutex);

    /* Reset the throttle counter */
    g_throttle = 0;

    /* Consume no frames this delivery; libspotify will redeliver them later */
    return 0;
}
As I said, there were still 22,050 frames of zeros delivered before the track stopped, but I believe libspotify may do this on purpose to ensure that the duration calculated from the number of frames received (song_duration_ms = total_frames_delivered / sample_rate * 1000) is greater than or equal to the duration reported by sp_track_duration. In my case, the track I was trying to stream was 172,000 ms long; without the extra padding the calculated duration is 171,796 ms, but with the padding it was 172,296 ms.
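A quick sanity check on those figures (assuming 44.1 kHz):

/* 171,796 ms of real audio          ->  171,796 * 44.1 ≈ 7,576,204 frames
   + 22,050 frames of zero padding   ->  another 500 ms
   = 7,598,254 frames                ->  7,598,254 / 44,100 * 1000 ≈ 172,296 ms,
   which is just past the 172,000 ms reported by sp_track_duration. */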
Hope this helps.

Editing Timeline from CCB file in cocos

I did some research into this and couldn't really find anything, so if this is a duplicate question I apologize. Anyway, I have made a CCB file in CocosBuilder and I would like to start the timeline at, for example, one second instead of playing it from the beginning. Is there a way to do this? Thanks for the help, guys.
Edit: I would like this to be done in code.
I am using Cocos2d-x version 2.2.1. I think there is no built-in option to play the timeline from a given point, but you can tweak things yourself to get it done (it is not simple).
You have to go into CCBAnimationManager, where you will find "mNodeSequences".
It is a dictionary that holds the different properties ("rotation", "position", etc.) and their keyframe values (these values are the ones specified in your CCB).
Internally, the animation manager reads these values and puts the corresponding actions into its runAction queue.
So you have to break that up yourself. For example, if you have a 5-minute timeline but want to start from the 1-minute mark, you have to run the first minute's actions without delay, and for the remainder you have to recalculate the tween intervals properly (see the sketch below).
It's a long procedure and needs calculation. If you don't know any simpler way, try this; if you do know one, please let us know (post it).
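A minimal sketch of the splitting step, assuming you have already extracted the keyframe times (in seconds) for one property from mNodeSequences; splitAtOffset is a hypothetical helper, not part of CCBReader or CCBAnimationManager:

#include <utility>
#include <vector>

// Returns the index of the last keyframe whose value should be applied
// instantly, plus the shortened duration of the first tween that is still
// ahead of the requested start offset.
std::pair<size_t, float> splitAtOffset(const std::vector<float>& keyframeTimes,
                                       float startOffset)
{
    size_t lastInstant = 0;
    for (size_t i = 0; i < keyframeTimes.size(); ++i) {
        if (keyframeTimes[i] <= startOffset) {
            lastInstant = i;                      // apply this keyframe immediately
        } else {
            // shorten the first remaining tween so it still ends on schedule
            return { lastInstant, keyframeTimes[i] - startOffset };
        }
    }
    return { lastInstant, 0.0f };                 // offset is past the last keyframe
}

You would then set the property value of keyframe lastInstant on the node directly, and build the remaining action sequence using the shortened first duration followed by the untouched intervals after it.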

Specify starting time of track within URL for SoundCloud?

Does anyone know if there's a way to generate URLs for SoundCloud tracks that specify a start time for the song? I'm looking for a way to force playback of streams from a certain point in the stream without having to do any processing on my end via the API.
As @bsz correctly noticed, we have released a way of specifying a start time when linking to a sound: append #t=12s to the sound's URL to start it at the 12th second, etc.
If the audio is long enough, you can use (e.g.) #t=2h10m12s.
They seem to have added a #t option, but I'm not sure if you can also give a stop time:
https://soundcloud.com/razhavaniazha-com/06-06-2013-razhavaniazha-afn#t=54:00
You can also specify a start time of one minute or over by appending
#XmXs
If you want a time right on the minute mark (e.g. 3:00), include 0 seconds (0s) in the time code (i.e. 3m0s), or else the start time will be ignored. It does not appear to support start times over an hour.
Alternatively, click Share and set or pick the time there.

GPUImageMovieWriter frame presentationTime

I have a GPUImageColorDodgeBlend filter with two inputs connected:
A GPUImageVideoCamera which is getting frames from the iPhone video camera.
A GPUImageMovie which is an (MP4) video file that I want to have laid over the live camera feed.
The GPUImageColorDodgeBlend is then connected to two outputs:
A GPUImageView to provide a live preview of the blend in action.
A GPUImageMovieWriter to write the movie to storage once a record button is pressed.
Now, before the video starts recording, everything works fine 100% of the time. The GPUImageMovie video is blended over the live camera feed without problems, and no issues or warnings are reported.
However, when the GPUImageMovieWriter starts recording, things start to go wrong randomly. About 80-90% of the time the GPUImageMovieWriter works perfectly: there are no errors or warnings, and the output video is written correctly.
However, about 10-20% of the time (and from what I can see this is fairly random), things seem to go wrong during the recording process (although the on-screen preview continues to work fine).
Specifically, I start getting hundreds and hundreds of Problem appending pixel buffer at time: errors.
This error originates from the - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex method in GPUImageMovieWriter.
The issue is triggered by problems with the frameTime values that are reported to this method.
From what I can see, the problem is that the writer sometimes receives frames timestamped by the video camera (which tend to have extremely high time values, like 64616612394291 with a timescale of 1000000000), but then sometimes receives frames timestamped by the GPUImageMovie, which are numbered much lower (like 200200 with a timescale of 30000).
It seems that GPUImageMovieWriter is happy as long as the frame values are increasing, but once the frame value decreases, it stops writing and just emits Problem appending pixel buffer at time: errors.
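For reference, converting the two example timestamps above to seconds shows how far apart the two clocks are:

// camera frame:  64616612394291 / 1000000000 ≈ 64616.6 s (presumably host-clock based)
// movie frame:   200200 / 30000              ≈ 6.67 s    (media-timeline based)
// Interleaving the two therefore produces frameTimes that jump up and down.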
I seem to be doing something fairly common, and this hasn't been reported anywhere as a bug, so my questions are (answers to any or all of these are appreciated -- they don't all need to necessarily be answered sequentially as separate questions):
Where do the frameTime values come from -- why does it seem so arbitrary whether the frameTime is numbered according to the GPUImageVideoCamera source or the GPUImageMovie source? Why does it alternate between the two -- shouldn't the frame-numbering scheme be uniform across all frames?
Am I correct in thinking that this issue is caused by non-increasing frameTimes?
...if so, why does GPUImageView accept and display the frameTimes just fine on the screen 100% of the time, yet GPUImageMovieWriter requires them to be ordered?
...and if so, how can I ensure that the frameTimes that come in are valid? I tried adding if (frameTime.value < previousFrameTime.value) return; to skip any lower-numbered frames, which works -- most of the time. Unfortunately, when I set playsAtActualSpeed on the GPUImageMovie, this tends to become far less effective, as all the frames end up getting skipped after a certain point.
...or perhaps this is a bug, in which case I'll need to report it on GitHub -- but I'd be interested to know if there's something I've overlooked here in how the frameTimes work.
I've found a potential solution to this issue, which I've implemented as a hack for now but which could conceivably be extended into a proper solution.
I've traced the source of the timing back to GPUImageTwoInputFilter, which essentially multiplexes the two input sources into a single output stream of frames.
In the method - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex, the filter waits until it has collected a frame from the first source (textureInput == 0) and from the second, and then forwards these frames on to its targets.
The problem (the way I see it) is that the method simply uses the frameTime of whichever frame arrives second (excluding the still-image case, where CMTIME_IS_INDEFINITE(frameTime) == YES, which I'm not considering here because I don't work with still images), and that frame may come from either input.
The relevant code which checks for both frames and sends them on for processing is as follows:
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    [super newFrameReadyAtTime:frameTime atIndex:0]; // this line has the problem
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}
What I've done is adjust the above code to [super newFrameReadyAtTime:firstFrameTime atIndex:0] so that it always uses the frameTime from the first input and totally ignores the frameTime from the second input. So far it's all working fine like this. (I'd still be interested for someone to let me know why it's written the way it is, given that GPUImageMovieWriter seems to insist on increasing frameTimes, which the method as written doesn't guarantee.)
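For clarity, here is the adjusted block, assuming firstFrameTime is the ivar in which GPUImageTwoInputFilter keeps the timestamp captured when the first input's frame arrived:

if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    // Always forward the first input's timestamp so downstream targets
    // (in particular GPUImageMovieWriter) see times from a single source.
    [super newFrameReadyAtTime:firstFrameTime atIndex:0];
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}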
Caveat: this will almost certainly break entirely if you work only with still images, in which case CMTIME_IS_INDEFINITE(frameTime) == YES will hold for your first input's frameTime.