How to select the intervals in which trigger signals are present (in Biopac)?

I ran an emotion-recognition experiment in PsychoPy in which 60 videos were shown. At the beginning and end of each video, a trigger signal was sent from PsychoPy to Biopac; this trigger signal indicates when the video starts and when it ends. Now to my question: in the analysis in Biopac, I want to select only the time intervals in which the trigger signal was present (i.e. only the time in which videos were shown). How can I automatically select these intervals?
How can I define fixed intervals that are then analyzed? I want to select exactly the intervals in which the trigger signal is present. Does anyone know more about this?

Related

Beckhoff - Ramp that can be interrupted/updated while ramping operation is in progress

I am looking for a way to ramp lights properly. This function block looked like a good candidate :
https://infosys.beckhoff.com/english.php?content=../content/1033/tcplclibbabasic/11640060811.html&id=
Unfortunately, it ignores any subsequent nEndLevel updates: while a ramp is in progress, any new value of nEndLevel is ignored, whereas what is usually needed from this type of ramp is to stop the current ramping operation and start a new one as soon as a new nEndLevel value is received.
Is there any other ramp function block in the Beckhoff library that can do that?
Basically, I need a ramp that can be interrupted/updated while a ramping operation is in progress.
I don't think you need another function block...
A rising edge at bStart starts dimming the light from the actual value to the end level (nEndLevel).
bStart: This input starts the dim-ramp from the actual value to nEndLevel within the time defined as tRampTime. This can be interrupted by bOn, bOff or bToggle at any time.
As I understand it, a rising edge on bStart starts a new ramp.
You could try to detect a change in the value of nEndLevel and generate a rising edge on bStart, which should make the block start a new ramp. For example, use a variable nEndLevel_old that retains the previous value for comparison in the next cycle.
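A minimal Structured Text sketch of that idea (variable names are illustrative; wiring bStart into the dimmer block is assumed to happen in the same cycle):

VAR
    nEndLevel     : INT;  (* requested end level, written elsewhere *)
    nEndLevel_old : INT;  (* value remembered from the previous cycle *)
    bStart        : BOOL; (* one-cycle pulse that restarts the ramp *)
END_VAR

(* bStart is TRUE only in the cycle in which nEndLevel changes,
   which gives the function block the rising edge it needs *)
bStart := (nEndLevel <> nEndLevel_old);
nEndLevel_old := nEndLevel;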

How to pause a for loop in MATLAB to create snapshots?

So I'm running code which solves an equation over a given period of time. As one would expect, the solution changes as time evolves, so I wanted to create some sort of for loop which has the effect of creating "snapshots", which is to say I intend to pause the output at set intervals.
How might I go about implementing such a function?
I was hoping to implement something which pauses the code at defined intervals and then resumes execution on a button press.
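For what it's worth, a minimal sketch of that idea, pausing every few iterations until a key or mouse press in the figure (solveEquation is a hypothetical placeholder for the actual solver):

tVec = 0:0.1:10;                    % simulated time steps
snapshotEvery = 10;                 % pause every 10 iterations
for k = 1:numel(tVec)
    u = solveEquation(tVec(k));     % placeholder for the real solver call
    plot(u); title(sprintf('t = %.1f', tVec(k)));
    drawnow;                        % flush graphics before blocking
    if mod(k, snapshotEvery) == 0
        waitforbuttonpress;         % resume on a key or mouse press
    end
end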

Callbacks: Difference between DAQmxRegisterDoneEvent() and DAQmxRegisterEveryNSamplesEvent()

Trying to figure out specifically how callback wrappers are called. Our code deals with a slowTask and an onTask. For slowTask, I deal with the following two lines (specific to this question):
DAQmxCfgSampClkTiming(slowTask, "OnboardClock", GUI_RATE,
DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1);
DAQmxRegisterEveryNSamplesEvent(slowTask, DAQmx_Val_Acquired_Into_Buffer, 1,
0, EveryNCallbackWrapper, this);
I understand that here, every time the buffer fills up with one sample, EveryNCallbackWrapper will be called.
For onTask, I have a hard time understanding how the callback gets called. I consulted the NI documentation but couldn't quite get it.
DAQmxCfgSampClkTiming(onTask, "OnboardClock", ON_RATE, DAQmx_Val_Rising,
DAQmx_Val_FiniteSamps, 100);
DAQmxRegisterDoneEvent(onTask, 0, DoneCallbackWrapper, this);
This one boggles my mind a bit more. I believe that whenever onTask is triggered (with a hardware trigger), the DAQ starts taking and digitizing analog measurements at ON_RATE samples/second, and once 100 samples have been taken/read into the DAQ's buffer, DoneCallbackWrapper() is called. Depending on how long this hardware trigger stays high, will this wrapper be called every time the DAQ reads 100 samples (while the trigger is high), or will the callback be called only once after 100 samples have been read?
The callback is called only once, after 100 samples have been read.
Because slowTask uses DAQmx_Val_ContSamps, the program asks for an infinite (aka continuous) acquisition where data is streamed to the host. Using the EveryNSamples callback allows the program to access and process the newest data that was sent by the device.
Because onTask uses DAQmx_Val_FiniteSamps, the program asks for a single acquisition of 100 samples. Using the Done event allows the program to access and process the complete and full acquisition.
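For illustration, a minimal sketch of what the two wrappers might look like (the bodies are assumptions; only the registration calls come from the question):

#include <NIDAQmx.h>

/* slowTask: fires every time 1 new sample reaches the buffer (ContSamps) */
int32 CVICALLBACK EveryNCallbackWrapper(TaskHandle task, int32 eventType,
                                        uInt32 nSamples, void *callbackData)
{
    float64 sample;
    int32 read = 0;
    /* read the newest sample out of the acquisition buffer */
    DAQmxReadAnalogF64(task, 1, 1.0, DAQmx_Val_GroupByChannel,
                       &sample, 1, &read, NULL);
    return 0;
}

/* onTask: fires once, when the finite 100-sample acquisition completes */
int32 CVICALLBACK DoneCallbackWrapper(TaskHandle task, int32 status,
                                      void *callbackData)
{
    float64 data[100];
    int32 read = 0;
    /* the complete acquisition is available here in one read */
    DAQmxReadAnalogF64(task, 100, 1.0, DAQmx_Val_GroupByChannel,
                       data, 100, &read, NULL);
    return 0;
}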
In your comment update, the program uses
DAQmxCfgDigEdgeStartTrig(onTask, "/PXI2Slot4/PXI_Trig0", DAQmx_Val_Rising);
to configure a digital edge start trigger for onTask. When that trigger line has a rising edge, the onTask acquisition begins, captures 100 samples, stops, and invokes the callback.
If the program needs to acquire 100 samples for onTask on every rising edge of /PXI2Slot4/PXI_Trig0, you can use the retriggerable property on NI 63xx series devices, which allows the same task to re-run for each trigger event.
More details are in the X Series User Manual:
The AI Start Trigger is also configurable as retriggerable. The timing engine generates the sample and convert clocks for the configured acquisition in response to each pulse on an AI Start Trigger signal.
The timing engine ignores the AI Start Trigger signal while the clock generation is in progress. After the clock generation is finished, the counter waits for another Start Trigger to begin another clock generation. Figure 4-22 shows a retriggerable analog input with three AI channels and four samples per trigger.
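A minimal sketch of that configuration (error handling omitted; ON_RATE as in the question; DAQmxSetStartTrigRetriggerable is the C API accessor for the retriggerable property):

#include <NIDAQmx.h>

/* Re-arm onTask after each finite acquisition: every rising edge on
   PXI_Trig0 then captures another 100 samples with the same task. */
void configureRetriggerableOnTask(TaskHandle onTask, float64 ON_RATE)
{
    DAQmxCfgSampClkTiming(onTask, "OnboardClock", ON_RATE, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 100);
    DAQmxCfgDigEdgeStartTrig(onTask, "/PXI2Slot4/PXI_Trig0", DAQmx_Val_Rising);
    DAQmxSetStartTrigRetriggerable(onTask, 1); /* TRUE: re-arm per trigger */
}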

Matlab: Synchronize Color/Depth acquisition of the Kinect

I'm having trouble synchronizing the color and depth images with the Image Acquisition Toolbox.
Currently, I'm just trying to log both streams into binary files without dropping frames or losing synchronization.
I'm not trying to render during the recording.
The code for the start button:
colorVid = videoinput('kinect',1);
depthVid = videoinput('kinect',2);
colorVid.FramesPerTrigger = inf;
depthVid.FramesPerTrigger = inf;
triggerconfig([colorVid depthVid],'manual');
iatconfigLogging(colorVid,'Video/Color.bin');
iatconfigLogging(depthVid,'Video/Depth.bin');
start([colorVid depthVid]);
pause(2); % make sure both sensors have started before triggering
trigger([colorVid depthVid]);
where iatconfigLogging() is from here
and the stop button just does
stop([colorVid depthVid]);
Since the frame rate of the Kinect is 30 FPS and we can't change this, I'm using FrameGrabInterval to emulate lower rates.
But when I go above about 5 FPS, I can't log depth and color and keep the frames synchronized for more than 20-25 seconds. And except at 1 FPS, the sync is gone after 2-3 minutes, and I'm looking for at least a 10-15 minute acquisition.
I'm looking at something like flushdata(obj,'triggers') right now, but I can't figure out how to keep the 30 FPS with the logging.
Thanks in advance to anyone who can suggest something.
As far as I know, you cannot synchronize the streams by triggering because they are not synchronized in hardware. I tried it, and the best I could come up with was timestamping each stream and throwing away frame pairs that were too far apart in time. I noticed the classic beat effect, whereby the streams drift in and out of sync with a period inversely proportional to the difference between the periods of the two streams. The obvious disadvantage of throwing away frames like this is that you get a non-continuous stream.
You can get timestamp information using
[data time] = getdata(vid,1);
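A rough sketch of that pairing idea, assuming both streams were logged to memory rather than disk, and using half a frame period as the tolerance (values are illustrative):

[colorData, colorTime] = getdata(colorVid, colorVid.FramesAvailable);
[depthData, depthTime] = getdata(depthVid, depthVid.FramesAvailable);
tol = 0.5/30;                          % half a frame period at 30 FPS
for k = 1:numel(colorTime)
    % depth frame closest in time to this color frame
    [dt, j] = min(abs(depthTime - colorTime(k)));
    if dt <= tol
        % colorData(:,:,:,k) and depthData(:,:,:,j) form a synced pair
    else
        % no depth frame is close enough: drop this color frame
    end
end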

iOS - Speed Issues

Hey all, I've got a method of recording that writes the notes that a user plays to an array in real time. The only problem is that there is a slight delay, and each sequence is noticeably slowed down when playing back. I upped the speed of playback by about 6 milliseconds and it sounds right, but I was wondering if the delay would vary on other devices?
I've tested on an iPod touch 2nd gen; how would this perform on the 3rd and 4th gen, as well as on iPhones? Do I need to test on all of them and find the optimal delay for each?
Any ideas?
More Info:
I use two NSThreads instead of timers, and fill an array with blanks where no notes should play (I use integers; -1 is a blank). While recording, it adds a blank every 0.03 seconds. Every time the user hits a note, the most recent blank is replaced by a number 0-7. When playing back, the second thread is used (2 threads because the second one has a shorter time interval), with an interval of 0.024 seconds. The 6 millisecond difference compensates for the delay between recording and playback.
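If I follow, the timeline is essentially an array of slots; roughly (names are illustrative):

// recording thread, every 0.03 s: append a blank slot
[recording addObject:@(-1)];

// when the user hits note n (0-7): overwrite the most recent blank
[recording replaceObjectAtIndex:recording.count - 1 withObject:@(n)];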
I assume that either the recording or playing of notes takes longer than the other, and thus creates the delay.
What I want to know is if the delay will be different on other devices, and how I should compensate for it.
Exact Solution
I may not have explained it fully; that's why this solution wasn't provided, but for anyone with a similar problem...
I played each beat, similar to a MIDI file, like so:
while (playing) {
    [self playBeat];  // placeholder: do stuff to play the beat
    // both dates are created only AFTER the beat has been played
    NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:0.024];
    while ([[NSDate date] compare:deadline] == NSOrderedAscending) {
        // wait until "now" passes the deadline
    }
}
The obvious thing that I was missing was to create the two dates BEFORE playing the beat...
D'OH!
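In other words, a sketch of the corrected ordering (playBeat is a placeholder): the deadline is created first, so the time spent playing the beat is absorbed into the wait instead of being added to it.

while (playing) {
    // create the deadline BEFORE playing the beat
    NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:0.024];
    [self playBeat];                    // placeholder: play the current beat
    [NSThread sleepUntilDate:deadline]; // sleep out whatever time remains
}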
It seems more likely to me that the additional delay is caused by the playback of the note, or other compute overhead in the second thread. Grab the wallclock time in the second thread before playing each note, and check the time difference from the last one. You will need to reduce your following delay by any excess (likely 0.006 seconds!).
The delay will be different on different generations of the iphone, but by adapting to it dynamically like this, you will be safe as long as the processing overhead is less than 0.03 seconds.
You should do the same thing in the first thread as well.
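A sketch of that compensation, with playBeat again a placeholder and 0.03 s as the nominal spacing:

NSTimeInterval target = 0.03;    // desired spacing between notes
NSDate *lastNote = [NSDate date];
while (playing) {
    [self playBeat];             // placeholder: play the current note
    // how much time the playback itself consumed
    NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:lastNote];
    if (elapsed < target) {
        // reduce the following delay by the measured excess
        [NSThread sleepForTimeInterval:target - elapsed];
    }
    lastNote = [NSDate date];    // reference point for the next note
}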
Getting high-resolution timestamps: there's a discussion on the Apple forums here, or this Stack Overflow question.