Given FrameRate does not match with real FrameRate - matlab

I am testing my laptop's integrated camera, and after writing this simple code the frame rates don't match. The camera reports 30 fps, but the calculation shows half that, about 15 fps. This is the code:
vid = videoinput('winvideo',1);
frameRates = set(getselectedsource(vid), 'FrameRate')
vid.FramesPerTrigger = 30;
start(vid); [frames, timeStamp] = getdata(vid);
frameRateCalc = 1/mean(diff(timeStamp))
And this is the output:
>> framerate
frameRates =
'30.0000'
frameRateCalc =
15.2007
I have tested with an external camera and it seems to work fine: 30 matches 30. Does anybody know why they don't match with the integrated camera?
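For reference, one way to look at this more closely is to log a longer run and inspect the individual timestamp gaps rather than only their mean; a steady gap of ~66 ms would mean the camera really delivers ~15 fps, while occasional double-length gaps would point to dropped frames. A minimal sketch along those lines (assuming the same 'winvideo' device and the Image Acquisition Toolbox):
% Sketch: inspect individual frame intervals, not just their mean.
vid = videoinput('winvideo', 1);
vid.FramesPerTrigger = 150;                 % roughly 5 s at the nominal 30 fps
start(vid);
[~, timeStamp] = getdata(vid);
dt = diff(timeStamp);                       % seconds between consecutive frames
fprintf('mean interval: %.1f ms\n', 1000*mean(dt));
fprintf('min/max interval: %.1f / %.1f ms\n', 1000*min(dt), 1000*max(dt));
fprintf('effective frame rate: %.2f fps\n', 1/mean(dt));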

Related

Most efficient way to track multiple small objects in MATLAB?

I am relatively new to image processing and have never attempted to do anything with images in MATLAB, so forgive me if I am making some very rookie errors.
I am attempting to make a program that will track ants in a video. The video is taken from a stationary camera and records the ants from a bird's-eye perspective. However, I am having issues making reliable tracks of the ants. Initially I used the ForegroundDetection function, but there were multiple issues:
1.) Stationary ants were not detected
2.) There was too much overlap between objects (high levels of occlusion)
A friend of mine recommended using a larger gap between the compared frames, so instead of subtracting frame 1 from frame 2, subtract frame 1 from frame 30 (1 second apart), as this makes ants that move only a little more likely to appear in the subtracted image.
Below is the code I have so far. It is a bit of a shot-in-the-dark attempt to solve the problem, as I am running out of ideas:
i = 1;
k = 1;
n = 1;
Video = {};
SubtractedVideo = {};
FilteredVideo = {};
videoFReader = vision.VideoFileReader('001.mp4',...
'ImageColorSpace', 'Intensity', 'VideoOutputDataType', 'uint8');
videoPlayer = vision.VideoPlayer;
blobby = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
'AreaOutputPort', true, 'CentroidOutputPort', true, ...
'MinimumBlobArea', 1);
shapeInserter = vision.ShapeInserter('BorderColor','White');
while ~isDone(videoFReader) %Read all frames of the video
    frame = step(videoFReader);
    Video{1, i} = frame;
    i = i+1;
end
%Perform subtraction
for j = 1:1:numel(Video)-60
    CurrentFrame = Video{1,j};
    FutureFrame = Video{1,j+60};
    SubtractedImage = imsubtract(CurrentFrame, FutureFrame);
    SubtractedVideo{1,j} = SubtractedImage;
    ImFiltered = imgaussfilt(SubtractedImage, 2);
    BWIm = im2bw(ImFiltered, 0.25);
    FilteredVideo{1,j} = BWIm;
end
for a = n:numel(FilteredVideo)
    frame = Video{1, n};
    bbox = step(blobby, FilteredVideo{1, k});
    out = step(shapeInserter, frame, bbox);
    step(videoPlayer, out);
    k = k+1;
end
Currently, when I run the code, I get the following error on the line out = step(shapeInserter, frame, bbox):
'The Points input must be a 4-by-N matrix. Each column specifies a different rectangle and is of the form [row column height width].'
My questions are:
1.) Is this the best way to try and solve the problem I'm having? Is there potentially an easier solution?
2.) What does this error mean? How do I solve the issue?
I appreciate any help anyone can give, thank you!

Image Processing - Dress Segmentation using opencv

I am working on dress feature identification using opencv.
As a first step, I need to segment t-shirt by removing face and hands from the image.
Any suggestion is appreciated.
I suggest the following approach:
Use Adrian Rosebrock's skin detection algorithm for detecting the skin (thanks to Rosa Gronchi for his comment).
Use a region-growing algorithm on the variance map. The initial seed can be calculated using stage 1 (see the attached code for more information).
code:
%stage 1: skin detection - Adrian Rosebrock solution
im = imread(<path to input image>);
hsb = rgb2hsv(im)*255;
skinMask = hsb(:,:,1) > 0 & hsb(:,:,1) < 20;
skinMask = skinMask & (hsb(:,:,2) > 48 & hsb(:,:,2) < 255);
skinMask = skinMask & (hsb(:,:,3) > 80 & hsb(:,:,3) < 255);
skinMask = imclose(skinMask,strel('disk',6));
%stage 2: calculate top, left and right centroid from the different connected
%components of the skin
stats = regionprops(skinMask,'centroid');
topCentroid = stats(1).Centroid;
rightCentroid = stats(1).Centroid;
leftCentroid = stats(1).Centroid;
for x = 1 : length(stats)
    centroid = stats(x).Centroid;
    if topCentroid(2) > centroid(2)
        topCentroid = centroid;
    elseif centroid(1) < leftCentroid(1)
        leftCentroid = centroid;
    elseif centroid(1) > rightCentroid(1)
        rightCentroid = centroid;
    end
end
%first seed - the average of the most left and right centroids.
centralSeed = int16((rightCentroid+leftCentroid)/2);
%second seed - a pixel which is right below the face centroid.
faceSeed = int16(topCentroid);
faceSeed(2) = faceSeed(2)+40;
%stage 3: std filter
varIm = stdfilt(rgb2gray(im));
%stage 4 - region growing on varIm using faceSeed and centralSeed
res1=regiongrowing(varIm,centralSeed(2),centralSeed(1),8);
res2=regiongrowing(varIm,faceSeed(2),faceSeed(1),8);
res = res1|res2;
%noise reduction
res = imclose(res,strel('disk',3));
res = imopen(res,strel('disk',2));
Result after stage 1 (skin detection): [image omitted]
Final result: [image omitted]
Comments:
Stage 1 follows Adrian Rosebrock's skin detection algorithm mentioned above.
The region growing function (regiongrowing) used above is a separate download, not a built-in MATLAB function.
The solution is not perfect. For example, it may fail if the texture of the shirt is similar to the texture of the background. But I think that it can be a good start.
Another improvement would be to use a better region growing algorithm, one which doesn't grow into the skinMask area. Also, instead of running the region growing algorithm twice independently, the second call could be based on the result of the first one.
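As a rough illustration of the first point (a sketch only, building on the variables defined above), the skin mask can simply be excluded from the grown region after the fact, which approximates region growing that is not allowed to enter skin pixels:
% Sketch: keep the grown region out of the detected skin area.
% Assumes skinMask, res1 and res2 from the code above.
res = (res1 | res2) & ~skinMask;
res = imclose(res, strel('disk', 3));
res = imopen(res, strel('disk', 2));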

How to make oscillator-based kick drum sound exactly the same every time

I’m trying to create a kick drum sound that must sound exactly the same when looped at different tempi. The implementation below sounds exactly the same when repeated once every second, but it sounds to me like every other kick has a higher pitch when played every half second. It’s like there is a clipping sound or something.
var context = new AudioContext();

function playKick(when) {
  var oscillator = context.createOscillator();
  var gain = context.createGain();
  oscillator.connect(gain);
  gain.connect(context.destination);
  oscillator.frequency.setValueAtTime(150, when);
  gain.gain.setValueAtTime(1, when);
  oscillator.frequency.exponentialRampToValueAtTime(0.001, when + 0.5);
  gain.gain.exponentialRampToValueAtTime(0.001, when + 0.5);
  oscillator.start(when);
  oscillator.stop(when + 0.5);
}

for (var i = 0; i < 16; i++) {
  playKick(i * 0.5); // Sounds fine with multiplier set to 1
}
Here’s the same code on JSFiddle: https://jsfiddle.net/1kLn26p4/3/
oscillator.start will begin the phase at 0. The problem is that you're starting the when parameter at zero; you should start it at context.currentTime:
for (var i = 0; i < 16; i++) {
  playKick(context.currentTime + i * 0.5); // Sounds fine with multiplier set to 1
}
The oscillator is set to start at the same time as the change from the default frequency of 440 Hz to 150 Hz. Sometimes this results in a glitch as the transition is momentarily audible.
The glitch can be prevented by setting the frequency of the oscillator node to 150 Hz at the time of creation. So add:
oscillator.frequency.value = 150;
If you want to make the glitch more obvious out of curiosity, try:
oscillator.frequency.value = 5000;
and you should be able to hear what is happening.
Updated fiddle.
EDIT
In addition, the same problem interacts with the timing of the ramp. You can further improve the sound by ensuring that the setValueAtTime event always occurs a short time after playback starts:
oscillator.frequency.setValueAtTime(3500, when + 0.001);
Again, not perfect at 3500 Hz, but it's an improvement, and I'm not sure you'll achieve sonic perfection with Web Audio. The best you can do is try to mask these glitches until implementations improve. At actual kick drum frequencies (e.g. the 150 Hz in your original question), I can't tell any difference between successive sounds. Hopefully that's good enough.
Revised fiddle.

Unable to acquire *precisely* timed videos in Matlab (using ImgAcq toolbox)

When I try to record using the code below, the resulting videos have the correct number of frames and file lengths, but the recorded time is always slightly longer than expected (as measured by filming a digital clock): a 60 min recording captures ~61 min, and 5 min recordings have an extra 3-5 seconds (so time is skipped during recording). Occasionally, the camera clearly shows time skipping: it records for some time, then pauses for up to an hour or so, and then resumes.
I am using a Basler GigE acA1300-60gm (http://www.baslerweb.com/en/products/area-scan-cameras/ace/aca1300-60gm) camera set to continuous triggering for several hours, and I need the acquired videos to have millisecond resolution. I am not sure why the recording times are so varied; am I using the wrong script for the job, or does it have something to do with the hardware settings?
(Matlab R2014a on Windows 7)
vid = videoinput(adapter, deviceIDVar{1,1}, formatVar);
vid.FramesPerTrigger = NoOfFramesPerFile;
src = getselectedsource(vid);
src.FrameStartTriggerMode = 'On';
src.FrameStartTriggerSource = 'Line1';
src.FrameStartTriggerActivation = 'RisingEdge';
src.FrameStartTriggerMode = 'Off';
src.PacketSize = 8000;
triggerconfig(vid, 'hardware', 'DeviceSpecific', 'DeviceSpecific');
vid.TriggerRepeat = 0;
vid.LoggingMode = 'disk';
for i = 1:FileLimit
    %file path and format settings
    diskLogger = VideoWriter(filenameWithExt, 'MPEG-4');
    diskLogger.FrameRate = 25;
    vid.DiskLogger = diskLogger;
    start(vid)
    wait(vid, Inf);
end
stop(vid)
delete(vid)
clear
Are there any better ways to acquire precisely timed videos (at 25 FPS)? Thanks in advance!

Detecting frames for which a face appears in a video

I need to detect the number of frames in which a face appears in a video. I looked into the sample code using the CAMShift algorithm provided on the MathWorks site (http://www.mathworks.in/help/vision/examples/face-detection-and-tracking-using-camshift.html). Is there a way of knowing whether a face has appeared in a particular frame?
I'm new to MATLAB. I'm assuming the step function will return a false value if no face is detected (i.e. the condition fails, similar to C). Is there a possible solution? I think using MinSize is also a possible option.
I am not concerned about the computational burden - although a faster approach for the same would be appreciated. My current code is given below:
clc;
clear all;
videoFileReader = vision.VideoFileReader('Teapot.mp4', 'VideoOutputDataType', 'uint8', 'ImageColorSpace', 'Intensity');
video = VideoReader('Teapot.mp4');
numOfFrames = video.NumberOfFrames;
faceDetector = vision.CascadeObjectDetector();
opFolder = fullfile(cd, 'Face Detected Frames');
frameCount = 0;
shotCount = 0;
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);
    frameCount = frameCount + 1;
    for i = 1:size(bbox,1)
        shotCount = shotCount + 1;
        rectangle('Position', bbox(i,:), 'LineWidth', 2, 'EdgeColor', [1 1 0]);
        videoOut = insertObjectAnnotation(videoFrame, 'rectangle', bbox, 'Face');
        progIndication = sprintf('Face has been detected in frame %d of %d frames', shotCount, numOfFrames);
        figure, imshow(videoOut), title(progIndication);
    end
end
release(videoFileReader);
You can use the vision.CascadeObjectDetector object to detect faces in any particular frame. If it does not detect any faces, its step method will return an empty array. The problem is that the face detection algorithm is not perfect; sometimes it detects false positives, i.e. it detects faces where there are none. You can try to mitigate that by setting the MinSize and MaxSize properties, assuming you know what size faces you expect to find.
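For illustration, a minimal sketch of that idea, counting the frames in which at least one face is detected (the MinSize/MaxSize values below are placeholders to tune for your video):
% Sketch: count frames containing at least one detected face.
videoFileReader = vision.VideoFileReader('Teapot.mp4', ...
    'VideoOutputDataType', 'uint8', 'ImageColorSpace', 'Intensity');
faceDetector = vision.CascadeObjectDetector('MinSize', [50 50], ...
    'MaxSize', [200 200]);   % placeholder size limits, tune to your video
frameCount = 0;
framesWithFace = 0;
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);   % empty if no face is found
    frameCount = frameCount + 1;
    if ~isempty(bbox)
        framesWithFace = framesWithFace + 1;
    end
end
release(videoFileReader);
fprintf('Face detected in %d of %d frames\n', framesWithFace, frameCount);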