How to get acquired frames at full speed? - Image Event Listener does not seem to execute after every event

My goal is to read out 1 pixel from the GIF camera in VIEW mode (live acquisition) and save it to a file every time the data is updated. The camera is ostensibly updating every 0.0001 seconds, because this is the minimum acquisition time Digital Micrograph lets me select in VIEW mode for this camera.
I can attach an Image Event Listener to the live image of the camera with the message map (messagemap = "data_changed:MyFunctiontoExecute"), and MyFunctiontoExecute runs successfully, giving me a file with numerous pixel values.
However, if I let this event listener run for a second, I only obtain close to 100 pixel values, when I was expecting closer to 10,000 (if the live image is updated every 0.0001 seconds).
Is this because the live image is not updated as quickly as I think?

The event-listener certainly is executed at each event.
However, the live display of a high-speed camera will almost certainly not update on each acquired frame. It will instead show some sort of cumulative or sampled display. The exact answer depends on the exact system you are on and how it is configured.
It should be noted that very high frame rates can usually only be achieved with dedicated firmware and optimized systems. It is unlikely that a "general software approach" - in particular one using interpreted, non-compiled code - will be able to provide the necessary speed. This type of approach might be doomed from the start.
(Instead, one will likely have to create a buffer and then set up the system to acquire data directly into that buffer at the highest possible frame rate. This means coding the camera acquisition directly.)
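One way to check how often the listener actually fires is to count its invocations over a known interval and compare against the nominal frame rate. A minimal sketch of the counter pattern, in Python for illustration only (the real version would live inside the DM-script listener, e.g. MyFunctiontoExecute; the class and names here are hypothetical):

```python
import time

class RateCounter:
    """Counts how often a data-changed callback fires and reports events/s."""
    def __init__(self):
        self.count = 0
        self.start = time.perf_counter()

    def on_data_changed(self):
        # Stand-in for the listener body; in real use the camera's
        # data_changed event would drive this call.
        self.count += 1

    def rate(self):
        elapsed = time.perf_counter() - self.start
        return self.count / elapsed if elapsed > 0 else 0.0

counter = RateCounter()
for _ in range(100):          # simulate 100 display updates
    counter.on_data_changed()
print(f"observed ~{counter.rate():.0f} events/s")
```

Comparing the observed events/s against the expected 10,000 fps would tell you directly how much the display decimates the acquired frames.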

Related

How to know the delay of frames between 2 videos, to sync an audio from video 1 to video 2?

I have many videos that I want to compare one-to-one to check whether they are the same, and to get the frame delay between them. What I do now is open both video files in VirtualDub and manually find, near the beginning of video 1, the position of a given frame, e.g. 4325. Then I find the position of the same frame in video 2, e.g. 5500. That makes a delay of +1175 frames. Then near the end of video 1 I check another given frame, at position, say, 183038. I check video 2 too (imagine the position is 184213) and calculate the difference, again +1175: eureka, same video!
The frames I choose to compare aren't exactly random: each must be one I can identify unambiguously (for example, a scene change, an explosion that appears from one frame to the next, a dark frame after a light one...). I always try to pick the first comparison frame within the first 10000 positions and take the second near the end.
What I do next is convert the audio from video 1 to video 2, calculating the number of ms needed, but I don't need help with that. I'd love to automate the comparison so I just have to select video 1 and video 2, nothing else; that way I could forget VirtualDub forever and save a lot of time.
I'm tagging this post as powershell too because I'm writing a script in which, at the moment, I have to enter the frame delay (after comparing manually) myself. It would be perfect if I could add this at the beginning of the script.
Thanks!
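The manual procedure described (find the same distinctive frame in both videos and subtract positions) can be automated by extracting a per-frame scalar signature from each video (e.g. mean brightness, which could be obtained with OpenCV's cv2.VideoCapture; that part is not shown here) and cross-correlating the two signatures. A minimal sketch, with the frame-reading step replaced by synthetic data:

```python
import numpy as np

def frame_offset(sig_a, sig_b):
    """Return the shift (in frames) that best aligns sig_b to sig_a.

    sig_a, sig_b: 1-D arrays of per-frame mean brightness (or any other
    per-frame scalar signature extracted from the two videos).
    """
    # Standardize so the correlation peak reflects shape, not scale.
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    corr = np.correlate(b, a, mode="full")
    # Re-centre the peak index so 0 means "already aligned".
    return int(np.argmax(corr)) - (len(a) - 1)

# Synthetic check: video 2 is video 1 delayed by 1175 frames.
rng = np.random.default_rng(0)
video1 = rng.random(5000)
video2 = np.concatenate([rng.random(1175), video1])
print(frame_offset(video1, video2))   # 1175, the injected delay
```

The returned offset is exactly the "+1175 frames" number found by hand above; running the same comparison on a second pair of signature windows near the end of the videos would reproduce the double-check step.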

PsychoPy: delayed picture display in an EEG experiment

I'm running an EEG experiment with pictures, and I send triggers over the parallel port. I added triggers to my code via the PsychoPy Builder and synchronized them to the screen refresh. I used a photodiode to test whether pictures are displayed at exactly the same time as the trigger is sent, and I find irregular delays: a trigger is sent between 5 ms and 26 ms earlier than the image is actually displayed.
I don't think image size is the issue, as I observed the delays even when I replaced the pictures with a small white image. Moreover, there is an ISI period of half a second before each picture is displayed, which should help. The technicians told me that the graphics card or a cable should not be the issue. Does anyone have an idea why I get these delays and how they could be solved?
Due to the comments, I'm adding a piece of code that sends a trigger:
# *image_training_port* updates
if t >= 4.0 and image_training_port.status == NOT_STARTED:
    # keep track of start time/frame for later
    image_training_port.tStart = t  # underestimates by a little under one frame
    image_training_port.frameNStart = frameN  # exact frame index
    image_training_port.status = STARTED
    win.callOnFlip(image_training_port.setData, int(triggers_image_training))
if image_training_port.status == STARTED and t >= (4.0 + (0.5 - win.monitorFramePeriod * 0.75)):  # most of one frame period left
    image_training_port.status = STOPPED
    win.callOnFlip(image_training_port.setData, int(0))
Actually, this is most likely due to the monitor itself. Try swapping in a different monitor.
Explanation: flat-panel displays often do some "post-processing" on the frame pixels to make them look prettier (almost all flat-panel TVs do this). The post-processing is unwanted not only because it alters your carefully calibrated stimulus, but also because it can introduce delays if it takes more than a few ms to perform. PsychoPy (or any software) can't detect this - it can only know the time the frame was flipped at the level of the graphics card, not what happens after that.

How to transition from a prerecorded video to real time video?

I have come up with an algorithm in MATLAB that lets me recognize hand gestures in prerecorded videos. Now I would like to run the same code on real-time video, but I am not sure how to do it after putting in these two lines:
vid=videoinput('winvideo',1);
preview(vid);
(real-time video is on)
I am thinking about a loop: while the video is on, snap some images repeatedly in order to analyze them.
for k = 1:numFrames
    % my code is applied here
end
So, I would like to know how to make this transition from prerecorded videos to real-time video.
Your help is much appreciated!
I would suggest you first verify whether you can perform acquisition + gesture recognition in real time using your algorithm. For that, first read video frames in a loop, render or save them, and compute the reading-and-rendering overhead of a single frame, say t1. Also compute the time taken by your algorithm to process one image, say t2. The throughput (number of frames processed per second) of your system will be
throughput = 1/(t1 + t2)
It is important to know how many frames you need to recognize a gesture. First try to compute the minimum number of images you need to identify a gesture in a given time, and then verify in real time whether you can process that number of images in that time.
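The timing procedure above can be sketched as follows (in Python for illustration; in MATLAB you would use tic/toc around the same two steps). The two stand-in functions are hypothetical placeholders for the real frame grab and gesture-recognition code:

```python
import time

def measure(fn, repeats=50):
    """Average wall-clock time of fn over several runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Hypothetical stand-ins - replace with your actual frame grab and
# gesture-recognition steps.
def read_and_render_frame():
    time.sleep(0.001)     # pretend reading/rendering takes ~1 ms

def process_frame():
    time.sleep(0.004)     # pretend the algorithm takes ~4 ms

t1 = measure(read_and_render_frame)          # per-frame acquisition cost
t2 = measure(process_frame)                  # per-frame algorithm cost
throughput = 1.0 / (t1 + t2)                 # frames per second sustained
print(f"t1={t1*1e3:.1f} ms, t2={t2*1e3:.1f} ms, ~{throughput:.0f} fps")
```

If the measured throughput is below the number of frames per second your gesture detector needs, the real-time version cannot work without speeding up one of the two steps.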

Beginner at Kinect - MATLAB: Kinect does not start

Hello, I am trying to set up a Kinect v1 in the MATLAB environment, and I cannot get joint coordinates out of the Kinect even though I have a preview of the captured depth. The preview says "waiting to start" even after I have actually started the video.
There are two different features which you don't want to mix up:
There is the preview function. By calling preview(vid) a preview window is opened and the camera runs. The preview is there to help you set up your camera, point it to the right spot etc. When you're finished with that, close the preview manually or via closepreview(vid).
When you are ready for image acquisition, call start(vid). With img = getdata(vid,1) you can then read one frame from the camera and save it to img. When you're finished with acquisition, call stop(vid) to stop the camera.
The camera itself starts capturing images as soon as start is called, so even if you wait a few seconds after calling start, the first image will be the one captured right then. Several properties control the acquisition; it is best to have a look at all the properties of vid.
You can manually specify a trigger to take an image by first setting triggerconfig(vid,'manual'), then starting the camera and finally calling trigger(vid) to take an image.
The number of frames acquired after calling start or trigger is specified by the FramesPerTrigger property of vid. To continuously acquire images, set it to inf. It is possible to use getdata to read any number of frames, e.g. getdata(vid,5);. Note that this only works if 5 frames are actually available from the camera; you get the number of available frames from the FramesAvailable property of vid.
You can put the image acquisition inside a for loop to continuously acquire images
n = 1000;
vid = videoinput('kinect',2);
set(vid,'FramesPerTrigger',n);
start(vid);
for k = 1:n
    img = getdata(vid,1);
    % do magic stuff with img
end
stop(vid);

Is there a better / faster way to process camera images than Quartz 2D?

I'm currently working on an iPhone app that lets the user take a picture with the camera and then process it using Quartz 2D.
With Quartz 2D I transform the context to make the picture appear with the right orientation (scale and translate because it's mirrored), and then I stack a bunch of layers with blending modes to process the picture.
The initial (and the final) picture is 3 MP or 5 MP depending on the device, and it takes a great amount of memory once drawn. Reminder: it's not a JPEG in memory, it's bitmap data.
My layers are the same size as the initial picture, so every time I draw a new layer on top of my picture I need the current picture state in memory (A) + memory for the layer to blend (B) + memory to write the result (C).
When I get the result I ditch "A" and "B" and take "C" to the next stage of processing, where it becomes the new "A"...
I need 4 passes like this to obtain the final picture.
Given the resolution of these pictures, my memory usage can climb high.
I can see a peak at 14-15 MB, and most of the time I only get level-1 memory warnings, but level 2s sometimes wave at me and kill my app.
Am I doing this the right way regarding the general process?
Is there a way to speed up the processing?
Why oh why do memory warnings spawn randomly?
Why is the second picture's processing longer than the first, as you can see in this pic:
Because the duration looks to be about twice as long, I'd say it is doing twice as much processing. Does the third photo take three times as long?
If so, that would seem to indicate it's processing all previously taken photos/layers. Which - of course - is a bug in your code somewhere.
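The "twice as much work for the second photo" pattern can be reproduced with a toy sketch (hypothetical, in Python rather than the poster's Objective-C; process_photo and the layer list are invented names): if the layer stack is never cleared between photos, each new photo re-blends every layer accumulated so far, so the work grows linearly with the number of photos taken.

```python
layers = []

def process_photo(new_layer, buggy=True):
    """Simulate processing one photo; returns the amount of blend work done.

    With buggy=True the stack is never cleared, so photo N re-blends
    all N layers. With buggy=False each photo starts fresh.
    """
    if not buggy:
        layers.clear()          # correct: start fresh for each photo
    layers.append(new_layer)
    work = 0
    for layer in layers:        # blend every layer currently in the stack
        work += layer
    return work

print(process_photo(1))  # 1: first photo, one layer blended
print(process_photo(1))  # 2: second photo does twice the work
```

That doubling is exactly the symptom in the timing screenshot; clearing (releasing) the intermediate layers between photos keeps each photo's cost constant.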