PsychoPy: delayed picture display in an EEG experiment - triggers

I'm running an EEG experiment with pictures, and I send triggers over the parallel port. I added the triggers to my code via the PsychoPy Builder and synchronized them to the screen refresh. I used a photodiode to test whether the pictures are displayed at exactly the same time as the trigger is sent, and I found irregular delays: a trigger is sent between 5 ms and 26 ms before the image is actually displayed.
I don't think that image size is the issue, as I observed the delays even when I replaced the pictures with a small white image. Moreover, there is an ISI period of half a second before the picture display, which should help. I was told by the technicians that the graphics card or a cable should not be an issue. Does anyone have an idea why I get these delays and how they could be solved?
Following the comments, here is the piece of code that sends a trigger:
# *image_training_port* updates
if t >= 4.0 and image_training_port.status == NOT_STARTED:
    # keep track of start time/frame for later
    image_training_port.tStart = t  # underestimates by a little under one frame
    image_training_port.frameNStart = frameN  # exact frame index
    image_training_port.status = STARTED
    win.callOnFlip(image_training_port.setData, int(triggers_image_training))
if image_training_port.status == STARTED and t >= (4.0 + (0.5 - win.monitorFramePeriod * 0.75)):  # most of one frame period left
    image_training_port.status = STOPPED
    win.callOnFlip(image_training_port.setData, int(0))

Actually, this is most likely due to the monitor itself. Try swapping in a different monitor.
Explanation: Flat panel displays often do some "post-processing" on the frame pixels to make them look prettier (almost all flat panel TVs do this). The post-processing is unwanted not only because it alters your carefully calibrated stimulus, but also because it can introduce delays if it takes more than a few milliseconds to perform. PsychoPy (or any software) can't detect this: it only knows the time the frame was flipped at the level of the graphics card, not what happens after that.
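If you want to quantify how much of the lag happens downstream of the flip, one option is to log the software flip time alongside the trigger and compare it with the photodiode onset. Below is a minimal sketch, not the original Builder code; the parallel-port address and the stimulus file name are assumptions you would adapt to your setup:

from psychopy import visual, core, parallel

win = visual.Window(fullscr=True, waitBlanking=True)
port = parallel.ParallelPort(address=0x0378)  # assumed address; adjust to your port
clock = core.Clock()
flip_times = []

stim = visual.ImageStim(win, image='white_square.png')  # hypothetical stimulus file
stim.draw()
win.callOnFlip(port.setData, 1)                              # trigger goes out on the flip
win.callOnFlip(lambda: flip_times.append(clock.getTime()))   # software flip timestamp
win.flip()

core.wait(0.05)
port.setData(0)    # reset the trigger lines
print(flip_times)  # compare with the photodiode onset to estimate the display lag

Any difference between this flip timestamp and the photodiode onset is introduced after the graphics card, which is why the suggestion above is to try a different monitor.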

Drawable presented late, causes steady state delay

I have a little Swift playground that uses a Metal compute kernel to draw into a texture each time the mouse moves. The compute kernel runs very fast, but for some reason, as I start dragging the mouse, some unknown delays build up in the system and eventually the result of each mouse move event is displayed as much as 4 frames after the event is received.
All my code is here: https://github.com/jtbandes/metalbrot-playground
I copied this code into a sample app and added some os_signposts around the mouse event handler so I could analyze it in Instruments. What I see is that the first mouse drag event completes its compute work quickly, but the "surface queued" event doesn't happen until more than a frame later. Then once the surface is queued, it doesn't actually get displayed at the next vsync, but the one after that.
The second mouse drag event's surface gets queued immediately after the compute finishes, but it's now stuck waiting for another vsync because the previous frame was late. After a few frames, the delay builds and later frames have to wait a long time for a drawable to be available before they can do any work. In the steady state, I see about 4 frames of delay between the event handler and when the drawable is finally presented.
What causes these initial delays and can I do something to reduce them?
Is there an easy way to prevent the delays from compounding, for example by telling the system to automatically drop frames?
I still don't know where the initial delay came from, but I found a solution to prevent the delays from compounding.
It turns out I was making an incorrect assumption about mouse events. Mouse events can be delivered more frequently than the screen updates — in my testing, often there is less than 8ms between mouse drag events and sometimes even less than 3ms, while the screen updates at ~16.67ms intervals. So the idea of rendering the scene on each mouse drag event is fundamentally flawed.
A simple way to work around this is to keep track of queued draws and simply don't begin drawing again if another drawable is still queued. For example, something like:
var queuedDraws = 0

// Mouse event handler:
if queuedDraws > 1 {
    return  // skip this frame
}
queuedDraws += 1

// While drawing
drawable.addPresentedHandler { _ in
    queuedDraws -= 1
}

How to get acquired frames at full speed? - Image Event Listener does not seem to be executing after every event

My goal is to read out 1 pixel from the GIF camera in VIEW mode (live acquisition) and save it to a file every time the data is updated. The camera is ostensibly updating every 0.0001 seconds, because this is the minimum acquisition time Digital Micrograph lets me select in VIEW mode for this camera.
I can attach an Image Event Listener to the live image of the camera with the message map (messagemap = "data_changed:MyFunctiontoExecute"), and MyFunctiontoExecute is successfully run, giving me a file with numerous pixel values.
However, if I let this event listener run for a second, I only obtain close to 100 pixel values, when I was expecting closer to 10,000 (if the live image is being updated every 0.0001 seconds).
Is this because the live image is not updated as quickly as I think?
The event listener is certainly executed at each event.
However, the live display of a high-speed camera will almost certainly not update for each acquired frame. It will perform some sort of cumulative or sampled display. The exact behaviour depends on the system you are on and how it is configured.
It should be noted that super-high frame rates can usually only be achieved by dedicated firmware and optimized systems. It's unlikely that a "general software approach" - in particular interpreted, non-compiled code - will be able to provide the necessary speed, so this type of approach to the problem might be doomed from the start.
(Instead, one will likely have to create a buffer and then set up the system to acquire data directly into that buffer at the highest possible frame rate, i.e. code the camera acquisition directly.)

pyglet: synchronise event with frame drawing

The default method of animation is to have an independent timer set to execute at the same frequency as the frame rate. This is not what I'm looking for because it provides no guarantee the event is actually executed at the right time. The lack of synchronisation with the actual frame drawing leads to occasional animation jitters. The obvious solution is to have a function run once for every frame, but I can't find a way to do that. on_draw() only runs when I press a key on the keyboard. Can I get on_draw() to run once per frame, synchronised with the frame drawing?
The way to do this is to use pyglet.clock.schedule(update), making sure vsync is enabled (which it is by default). This ensures on_draw() is run once per frame. The update function passed to pyglet.clock.schedule() doesn't even need to do anything; it can just be blank. on_draw() is now executed once per frame draw, so there's no point in having separate functions that are both executed once per frame. It would have been a lot nicer if there were just an option somewhere in the Window class saying on_draw() should be run once per frame, but at least it's working now.
I'm still getting bad tearing, but that's a bug in pyglet on my platform or system. There wasn't supposed to be any tearing.
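For reference, a minimal sketch of this approach (the window and label here are just illustrative):

import pyglet

window = pyglet.window.Window(vsync=True)  # vsync is on by default anyway
label = pyglet.text.Label('synced to refresh', x=10, y=10)

@window.event
def on_draw():
    window.clear()
    label.draw()  # per-frame drawing/animation goes here

def update(dt):
    pass  # empty on purpose; scheduling it just forces a redraw every frame

pyglet.clock.schedule(update)
pyglet.app.run()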

Cocos2d: blurred (diffuse) image at 60 fps

The game was created with cocos2d 0.99.5 and Box2D.
iPhone SDK 4.3
We have a character. When the character moves quickly, it looks blurred (fuzzy/unfocused), both in the simulator and on the device (iPhone 3G).
The character is moved using a mouseJoint (dampingRatio = 0 // frequencyHz = -1).
In a static screenshot the image is clear: link
The character is in focus; a screenshot does not convey the problem.
The frame rate is 60 fps the whole time.
Parameters tried:
use kCCDirectorProjection2D // 3D
aliased // antialiased texture params
CC_COCOSNODE_RENDER_SUBPIXEL 1 and 0
Video sample: link
How can I get a clear image of the character while it moves?
I also had a problem like this and fixed it by changing this line in ccConfig.h:
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 0
to
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 1
This is the comment for this define, maybe it helps someone.
/** #def CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
If enabled, the texture coordinates will be calculated by using this formula:
- texCoord.left = (rect.origin.x*2+1) / (texture.wide*2);
- texCoord.right = texCoord.left + (rect.size.width*2-2)/(texture.wide*2);
The same for bottom and top.
This formula prevents artifacts by using 99% of the texture.
The "correct" way to prevent artifacts is by using the spritesheet-artifact-fixer.py or a similar tool.
Affected nodes:
- CCSprite / CCSpriteBatchNode and subclasses: CCLabelBMFont, CCTMXTiledMap
- CCLabelAtlas
- CCQuadParticleSystem
- CCTileMap
To enabled set it to 1. Disabled by default.
#since v0.99.5
*/
I am pretty sure that what you are describing is an optical illusion. LCDs, especially lower-quality LCDs, have a finite response time. If this response time is too slow, it can cause ghosting, i.e. the moving object looks smeared. Basically what's happening is the previous frame's (or several frames') pixels take a long time to actually "turn off" and you see fainter versions of your sprite left behind as it moves.
With regard to your comment:
For the experiment, I took a pencil, put it on a sheet of paper and
began to move it quickly. My eyes see the pencil in focus, so the problem is not
an optical effect but a problem in the code.
Looking at a moving object in the real world is not the same as looking at a moving object on the screen, with or without a poor display response time. The real-world object moves continuously, but the screen object moves in discrete steps. Your eye can follow the pencil exactly and keep the image sharp on your retina. If you follow a screen image, however, your eye moves smoothly, while the screen image "jumps" from place to place. This can cause a "juddering" effect for sufficiently fast-moving objects, even at high framerates. If 60fps is still juddery, there is basically no way around this; it is a limitation of current technology.

Is there a better / faster way to process camera images than Quartz 2D?

I'm currently working on an iPhone app that lets the user take a picture with the camera and then process it using Quartz 2D.
With Quartz 2D I transform the context to make the picture appear with the right orientation (scale and translate because it's mirrored), and then I stack a bunch of layers with blending modes to process the picture.
The initial picture (and the final result) is 3 MP or 5 MP depending on the device, and it takes a large amount of memory once drawn. Reminder: it's not a JPEG in memory, it's bitmap data.
My layers are the same size as the initial picture, so every time I draw a new layer on top of my picture I need the current picture state in memory (A) + the memory for the layer to blend (B) + the memory to write the result (C).
When I get the result I ditch "A" and "B" and take "C" to the next stage of processing, where it becomes the new "A"...
I need 4 passes like this to obtain the final picture.
Given the resolution of these pictures, my memory usage can climb quite high.
I can see a peak at 14-15 MB, and most of the time I only get level 1 memory warnings, but level 2 warnings sometimes show up and kill my app.
Am I doing this the right way in terms of the general process?
Is there a way to speed up the processing?
Why oh why do memory warnings spawn randomly?
Why does processing the second picture take longer than the first, as you can see in this pic:
The duration looks to be about twice as long, so I'd say it's doing twice as much processing. Does the third photo take three times as long?
If so, that would seem to indicate it's processing all previously taken photos/layers, which is of course a bug in your code somewhere.