Convex Hull - Higher Dimension (20D) - scipy

I'm trying to find the convex hull vertices of higher-dimensional data (20D). I have found open-source algorithms based on Qhull, which can handle up to 8D (and sometimes 9D with fewer points). Are there any algorithms that can handle 20 dimensions or more? Computation time is not an issue.
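If you only need the hull vertices rather than the facets, there is a standard linear-programming test that scales to 20 dimensions and beyond: a point is a vertex of the convex hull exactly when it cannot be written as a convex combination of the other points. Below is a minimal sketch in MATLAB using linprog (the same idea works with scipy.optimize.linprog); P is a hypothetical n-by-d point matrix.

function vert = hullVertices(P)
% Flags which rows of P (n-by-d) are convex hull vertices, one small LP per point.
n = size(P, 1);
vert = false(n, 1);
opts = optimoptions('linprog', 'Display', 'none');
for i = 1:n
    Q = P([1:i-1, i+1:n], :);            % every point except P(i,:)
    % feasibility LP: is there lambda >= 0 with sum(lambda) = 1
    % and Q'*lambda = P(i,:)'? If not, P(i,:) is a hull vertex.
    Aeq = [Q'; ones(1, n-1)];
    beq = [P(i,:)'; 1];
    f = zeros(n-1, 1);                   % objective irrelevant; feasibility only
    [~, ~, exitflag] = linprog(f, [], [], Aeq, beq, zeros(n-1, 1), [], opts);
    vert(i) = (exitflag == -2);          % exitflag -2 means infeasible
end
end

This costs one LP per point, so it is slow but dimension-robust, which fits the "computation time is not an issue" constraint.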

Related

Stability of pose estimation using n points

I am using a chessboard to estimate the translation vector between it and the camera. First, the intrinsic camera parameters are calculated; then the translation vector is estimated using n points detected on the chessboard.
I found a very strange phenomenon: the translation vector is accurate and stable when using more points on the chessboard, and the effect becomes more pronounced as the distance grows. For instance, with 1 cm x 1 cm chessboard squares at a distance of 3 m, the translation vector is estimated accurately using 25 points, but is inaccurate and unstable using the minimal 4 points. However, at a distance of 0.6 m, the estimates using 4 points and 25 points are similar, and both are accurate.
How can this phenomenon be explained in theory? What is the relationship between the stability of the estimate, the distance, and the number of points?
Thanks.
When you are using a smaller number of points, the calculation of the translation vector is more sensitive to the noise in the coordinates of those points. Point coordinates are noisy due to the finite resolution of the camera (among other things), and that noise only increases with distance. So using a larger number of points should provide a better estimate.
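A minimal Monte Carlo sketch of this effect (made-up intrinsics, a fronto-parallel board with R = I, and 0.5 px detection noise; the linear system comes from multiplying the pinhole projection through by the depth):

fx = 2000; fy = 2000; cx = 960; cy = 540;   % assumed intrinsics
for Z = [0.6, 3.0]                          % board distance in meters
    for n = [2, 5]                          % n*n points, 1 cm spacing
        [gx, gy] = meshgrid((0:n-1) * 0.01);
        X = gx(:); Y = gy(:);               % board points in the camera frame
        u = fx * X / Z + cx;                % ideal (noise-free) projections
        v = fy * Y / Z + cy;
        e = zeros(500, 1);
        for trial = 1:500
            un = u + 0.5 * randn(size(u));  % 0.5 px detection noise
            vn = v + 0.5 * randn(size(v));
            % each point gives two equations linear in t = (tx; ty; tz):
            % fx*tx - (un-cx)*tz = (un-cx)*Z - fx*X, and likewise for y
            A = [fx * ones(n*n, 1), zeros(n*n, 1), -(un - cx); ...
                 zeros(n*n, 1), fy * ones(n*n, 1), -(vn - cy)];
            b = [(un - cx) * Z - fx * X; (vn - cy) * Z - fy * Y];
            t = A \ b;                      % least-squares translation
            e(trial) = norm(t);             % true translation is zero
        end
        fprintf('Z = %.1f m, %2d points: mean |t| error = %.4f m\n', Z, n*n, mean(e));
    end
end

The error grows with distance (the board spans fewer pixels, so the same pixel noise is a larger relative perturbation) and shrinks roughly like 1/sqrt(number of points), matching the observed behavior.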

Perform 4d FFT and specify frequency domain

I encountered this problem while researching image reconstruction for a two-slab geometry. Suppose I have an array of 15 by 15 sources with spacing 1 on one plane, and an array of 57 by 57 detectors with spacing 0.5 on the other, parallel plane. The total measurement data is hence a 4-D array of size 57 by 57 by 15 by 15. The first step is to perform a 4-D (or, more accurately, double 2-D) Fourier transform of the data with respect to the detector lattice and source lattice respectively, and I want to use the built-in function fft2 in MATLAB. My code is as follows:
for s = 1:Ns
    data(:,:,s) = fftshift(fft2(data(:,:,s)));  % 2-D FFT over the detector lattice; fft2 assumes exp(-i*omega*t)
end
data = reshape(data, [Nx, Ny, sqrt(Ns), sqrt(Ns)]);  % split the 225 sources into a 15-by-15 lattice
for i = 1:Nx
    for j = 1:Ny
        data(i,j,:,:) = fftshift(fft2(squeeze(data(i,j,:,:))));  % 2-D FFT over the source lattice
    end
end
val = data;
In the above code, data is the measurement array, originally of size 57 by 57 by 225, with Nx = Ny = 57 and Ns = 225. Can anyone point out whether something is going wrong in my implementation of the double 2-D FFT above, or at least tell me how I can verify it?

My second question is about the frequency domain. According to the MATLAB documentation, the frequency lattice with respect to the detector plane should be (-28:28)*2*pi/57/0.5 (for both components), while the frequency lattice with respect to the source plane should be (-7:7)*2*pi/15/1 (for both components as well). Is that true? Can one say that the element at position (29,29,8,8) of val represents exactly the zero frequency, both for the detector and for the source? Thanks in advance!
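Since the 4-D DFT is separable, one way to verify the implementation is that the double 2-D FFT must equal a single fftn shifted along every dimension. A self-contained check on random stand-in data (not the real measurements):

X = randn(57, 57, 15, 15);
A = X;
for s3 = 1:15
    for s4 = 1:15
        A(:,:,s3,s4) = fftshift(fft2(A(:,:,s3,s4)));       % detector dims
    end
end
for i = 1:57
    for j = 1:57
        A(i,j,:,:) = fftshift(fft2(squeeze(A(i,j,:,:))));  % source dims
    end
end
B = fftn(X);
for d = 1:4
    B = fftshift(B, d);                 % shift each dimension in turn
end
disp(max(abs(A(:) - B(:))))             % agreement up to floating-point round-off
fdet = (-28:28) * 2*pi / (57 * 0.5);    % detector-plane angular frequencies
fsrc = (-7:7)   * 2*pi / (15 * 1.0);    % source-plane angular frequencies

As for the frequency lattice: for an odd length N, fftshift moves the zero-frequency bin to index (N+1)/2, so fdet(29) and fsrc(8) are both 0 and val(29,29,8,8) is indeed the zero-frequency sample for both the detector and source lattices.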

PCA with correlated dimensions

I am trying to describe the limit cycle of the waveforms of the 'arms' of a swimming alga in terms of the shape scores of its principal components. So I have the shapes of the arms stored in terms of xy coordinates at nodes distributed along the arc length of the arm. I am trying to do a principal component analysis on this, but I am struggling a bit.
Before, I had the shapes described in terms of curvature along the arc length. Each curve had 25 nodes, so I got a nice 25x25 covariance matrix. The analysis was very straightforward, everything worked fine.
Now, for reasons irrelevant here, it is more convenient to have the curves described in terms of the x and y values of the nodes: 25 nodes, each with an x and a y value, giving 50 features per sample, although features 1:25 and 26:50 form 'sets'.
This can be viewed as a matrix of n samples with m nodes with k features (3D), or simply as a 2D matrix with n samples with k features, where x and y are separate features.
Just chaining the x and y vectors together and doing PCA on that did not really help me; I no longer understand what I am doing. I get the basics of PCA, but how to apply it to a more complex data set like this is beyond me. Also, I am not too great at matrix algebra, so a more intuitive explanation is welcome.
The question: Am I doing entirely the wrong thing or is there some way to retrieve shape modes of 25 nodes with an x and y value?
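For what it is worth, the concatenation approach does give interpretable shape modes: each principal component is a 50-vector whose first 25 entries are x-displacements and last 25 are y-displacements of the nodes. A minimal sketch, assuming the shapes sit in n-by-25 matrices X and Y (hypothetical names; random stand-ins here):

n = 200;                              % number of shape samples (demo value)
X = randn(n, 25);  Y = randn(n, 25);
F  = [X, Y];                          % n-by-50: features 1:25 are x, 26:50 are y
mu = mean(F, 1);
Fc = F - mu;                          % center every feature
[U, S, V] = svd(Fc, 'econ');          % columns of V are the 50-D shape modes
scores = Fc * V;                      % per-sample mode scores (equals U*S)
dx = V(1:25, 1);  dy = V(26:50, 1);   % mode 1, split back into x and y per node
k = 3;                                % reconstruct a shape from its first k scores
shape = mu + scores(1, 1:k) * V(:, 1:k)';

One caveat: because x and y are in the same units here, no per-feature scaling is needed; mixing units (as with curvature) would call for standardizing the features first.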

K-means clustering

I want to use K-means clustering on my features, which are of size 286 x 276, so I can do clustering before using SVM. These features are of 16 different gestures. I am using the MATLAB function IDX = kmeans(Feat_train, 16). In the IDX variable I get a vector of size 286x1 containing numbers between 1 and 16. I do not understand what those numbers mean, or what I have to do next to give input to the SVM for training.
The way you invoked kmeans in MATLAB with your 286-by-276 feature matrix, kmeans assumes you have 286 points (row vectors) in a 276-dimensional space. kmeans then tries to find k = 16 centers that best represent your 286 high-dimensional points.
Finally, it gives back IDX: an index per point telling you to which of the 16 centers that point belongs.
It is now up to you to decide how to feed this information into the SVM machinery.
The number shows which cluster each 1x276 "point" belongs to.
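One common way to connect the two, assuming you also have the true gesture labels Y_train (286x1) and a test set Feat_test (hypothetical names): re-encode every sample by its distances to the 16 centroids and train a multiclass SVM on that, e.g. with fitcecoc from the Statistics and Machine Learning Toolbox.

k = 16;
[IDX, C] = kmeans(Feat_train, k);   % C: 16-by-276 cluster centroids
D_train = pdist2(Feat_train, C);    % 286-by-16 distances to each centroid
mdl = fitcecoc(D_train, Y_train);   % multiclass SVM on the 16-D encoding
D_test = pdist2(Feat_test, C);      % encode test data with the same centroids
Y_pred = predict(mdl, D_test);

Alternatively, you can skip k-means entirely and feed the raw 276-D features to fitcecoc; the clustering step is only useful if the distance encoding actually separates the gestures better.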

camera calibration MATLAB toolbox

I have to perform re-projection of my 3D points (I already have data from Bundler).
I am using the Camera Calibration Toolbox in MATLAB to get the intrinsic camera parameters. I got the output below from 27 images (a chessboard, photographed from different angles).
Calibration results after optimization (with uncertainties):
Focal Length: fc = [ 2104.11696 2101.75357 ] ± [ 23.13283 22.92478 ]
Principal point: cc = [ 969.15779 771.30555 ] ± [ 21.98972 15.25166 ]
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ]
Distortion: kc = [ 0.11555 -0.55754 -0.00100 -0.00275 0.00000 ] ± [ 0.05036 0.59076 0.00307 0.00440 0.00000 ]
Pixel error: err = [ 0.71656 0.63306 ]
Note: The numerical errors are approximately three times the standard deviations (for reference).
I am wondering about the numerical errors, i.e. the focal length error ± [23.13283 22.92478], the principal point error, etc. What do these error numbers actually represent, and what is their impact?
The pixel error is quite small, though.
So far I use the following matrix from above data for my re-projection:
K=[ 2104.11696 0 969.15779; 0 2101.75357 771.30555;0 0 1]
The above matrix "K" seems right to me. Correct me if I am doing something wrong...
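For reference, the re-projection I am doing with this K is the standard pinhole projection; a minimal sketch (Xw, R, t are placeholder names for the Bundler outputs; the kc lens distortion is ignored here):

K = [2104.11696 0 969.15779; 0 2101.75357 771.30555; 0 0 1];
Xc = R * Xw + t;            % world -> camera frame (Xw is 3-by-N, t is 3-by-1)
x  = K * Xc;                % apply the intrinsic matrix
u  = x(1,:) ./ x(3,:);      % divide by depth to get pixel coordinates
v  = x(2,:) ./ x(3,:);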
Will be waiting for your replies.
There are two kinds of errors here.
One is the reprojection error. Once you calibrate a camera, you use the resulting camera parameters to project the checkerboard points from world coordinates into the image. The reprojection errors are then the distances between those projected points and the detected checkerboard points. The acceptable value for the reprojection errors depends on your application, but a good rule of thumb is that the mean reprojection error should be less than 0.5 pixels.
The other kind of errors are those +/- intervals you get for each estimated parameter. Those are based on the standard errors resulting from the optimization algorithm. The values that Bouguet's Camera Calibration Toolbox gives you are actually 3 times the standard error, which corresponds to a 99.73% confidence interval. In other words, if the Camera Calibration Toolbox reports the focal length error as +/- [23.13283 22.92478], then the actual focal length is within that interval of your estimate with probability 99.73%.
The reprojection errors give you a quick measure of the accuracy of your calibration. The standard errors - let's call them estimation errors - are useful for a more careful analysis of your results. For example, you should try excluding calibration images that have high mean reprojection error. On the other hand, if your estimation errors are high, you can try adding more calibration images.
By the way, the Computer Vision System Toolbox now includes a GUI Camera Calibrator app that makes camera calibration much easier. There is also a good explanation of the reprojection errors in the documentation.
The Camera Calibration Toolbox extracts grid points from the checkerboard images and uses them to find the calibration parameters.
The pixel errors are the mean re-projection error for the extracted grid points, i.e. the distance between the actual pixel location and the one obtained using the calculated K matrix. These numbers should mostly be within 1 (a 1-pixel error), as yours are. The error in focal length is the variance of the calculated focal length.
You need only 3 or 4 images to calibrate a camera (I forget the exact number). If you provide more images, it will compute K for all the combinations of 3-4 images and combine them into a single K. The errors are the variance of all these computed K's.
Your numbers are quite high (they should be within 3-4 pixels, compared to your 22-23). Likely reasons are bad calibration images and a wrong initial estimate of the grid points (which you provide manually by selecting 4 corners in the image). Also, f_x and f_y are usually the same in modern cameras, so you should take the mean of the two, (f_x + f_y)/2.
Regarding your principal point: it seems your camera resolution is 1920 x 1600, so you should use [960 800] instead of the value given by the toolbox. Nowadays the CCD is usually placed carefully, so the principal point is almost exactly at the center of the image.