finding the global maximum of an unknown surface - matlab

I have a model that is solved, returns a single output value and plots it. From those values I plot a surface, with x-values varying from 1 to 35, y-values varying from 1 to 39, and the returned values on the z-axis. See below.
This figure does not behave according to a defined function, it is simply a plot of output values.
I've been trying to use a random optimization algorithm that I created in an attempt to find the global maximum, but it takes a very long time and isn't always correct (when compared to a grid-search algorithm that I use as a comparison). The surface that is created has subtle changes in it, enough to create multiple troublesome local minima and maxima. I'm looking for a way to find the global maximum of this non-convex surface in a relatively quick fashion.
EDIT:
35-by-39 is the search area and that's as big as it gets. The values of the x and y axes are the input values of the model (I probably should've mentioned that), so each of the z-values is associated with an x and y input coordinate. And my initial guess is usually smack dab in the middle of the search area.
The creation of this figure took about 50 minutes, because each of the 1365 z-values takes about 3 seconds to compute. I'd like to do this without having to use exhaustive enumeration (evaluating every point for a z-value). I'd like this to take around 5 minutes instead of 50.
EDIT(2):
Sorry for the confusion. The figure below is a 35-by-39 grid of z-values and is used purely for reference. In the actual executing of the program, all I have is the x- and y-coordinates, and I am trying to find the global maximum z-value in the fewest function evaluations possible in order to save time. So horchler, in reference to your comment, the latter.
EDIT(3):
The thing with this figure is that it's only a single example. Multiple different figures are formed when I use data from a separate source (i.e. the left side might be uninteresting in this example, but for a separate set of data, it may or may not contain the global max). And this adds to the complexity. It is impossible to tell from the data where the location of the global max will be.
Some surfaces are incredibly smooth, others have large and frequent peaks throughout.
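One way to cut the evaluation count on an expensive black-box objective like this is a two-stage search: a coarse subgrid pass followed by a local hill climb from the best coarse point. The sketch below is only an illustration under stated assumptions, not something from the thread: runModel is a hypothetical wrapper around the 3-second model call, the 5-point coarse spacing is arbitrary, and a greedy climb can still miss the global max if the true peak falls entirely between coarse samples (Global Optimization Toolbox solvers such as surrogateopt or bayesopt are the more principled route).

% Coarse pass: every 5th x and y value, 7*8 = 56 evaluations (~3 minutes at 3 s each).
cache = nan(35, 39);                              % NaN marks points not yet evaluated
for x = 1:5:35
    for y = 1:5:39
        cache(x, y) = runModel(x, y);             % hypothetical 3-second model call
    end
end
[~, idx] = max(cache(:));                         % max ignores the NaN entries
[bx, by] = ind2sub(size(cache), idx);

% Local refinement: greedy hill climb on the full-resolution grid, reusing the cache.
improved = true;
while improved
    improved = false;
    for d = [1 0; -1 0; 0 1; 0 -1].'              % the four grid neighbours
        nx = bx + d(1);  ny = by + d(2);
        if nx < 1 || nx > 35 || ny < 1 || ny > 39, continue; end
        if isnan(cache(nx, ny)), cache(nx, ny) = runModel(nx, ny); end
        if cache(nx, ny) > cache(bx, by)
            bx = nx;  by = ny;  improved = true;
        end
    end
end
fprintf('Best found: z = %.4f at x = %d, y = %d\n', cache(bx, by), bx, by);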

Related

Automatically truncating a curve to discard outliers in matlab

I am generating some data whose plots are as shown below.
In all the plots I get some outliers at the beginning and at the end. Currently I am truncating the first and the last 10 values. Is there a better way to handle this?
I am basically trying to automatically identify the two points shown below.
This is a fairly general problem with lots of approaches; usually you will use some a priori knowledge of the underlying system to make it tractable.
So for instance if you expect to see the pattern above - a fast drop, a linear section (up or down) and a fast rise - you could try taking the derivative of the curve and looking for large values and/or sign reversals. Perhaps it would help to bin the data first.
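A minimal sketch of that derivative-and-threshold idea, assuming y is the raw data vector and that the drop/flat/rise pattern is actually present; the 5*std factor is just a starting point to tune on real data:

dy  = diff(y);                          % approximate derivative
thr = 5 * std(dy);                      % threshold on |dy|; tune empirically
big = find(abs(dy) > thr);              % samples where the curve changes fast
mid = round(numel(y) / 2);
firstGood = max(big(big < mid)) + 1;    % last fast sample of the initial drop
lastGood  = min(big(big > mid));        % first fast sample of the final rise
yTrimmed  = y(firstGood:lastGood);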
If your pattern is not so easy to define but you are expecting a linear trend you might fit the data to an appropriate class of curve using fit and then detect outliers as those whose error from the fit exceeds a given threshold.
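And a rough sketch of the fit-and-residual variant, assuming the Curve Fitting Toolbox and a roughly linear trend in the good section; the 3*std cutoff is again arbitrary:

x = (1:numel(y)).';
f = fit(x, y(:), 'poly1');              % linear fit; 'Robust','Bisquare' also helps
res = abs(y(:) - f(x));                 % residuals from the linear trend
keep = res < 3 * std(res);              % drop points far from the fit
yClean = y(keep);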
In either case you still have to choose thresholds - mean, variance and higher order moments can help here but you would probably have to analyse existing data (your training set) to determine the values empirically.
And perhaps, after all that, as Shai points out, you may find that lopping off the first and last ten points gives the best results for the time you spent (cf. Pareto principle).

MATLAB: How can I get griddata() to cache things to run faster for many identical interpolations?

I'm using the griddata() command in MATLAB to go from a spherical grid with dimensions on the order of 128x256x1500 to a Cartesian cubic grid, centered on the sphere and containing N^3 regularly-spaced points (where N is between 128 and 512). I need to do this for dozens or hundreds of checkpoints in my simulation, and several variables per checkpoint. I'm going to need to interpolate from the same spherical grid to the same cubic grid several hundred or several thousand times over, using new data on the spherical grid each time!
Since the most computationally expensive part of this routine is the triangulation and interpolation, I would like to cache some information the first time the routine is run and use that information for subsequent runs.
I think I could probably cache a table of vertex indices and associated interpolation weights for every point on the cubic grid, but I'm not sure how/where to do this....
As far as I can tell, this is not possible using the current implementation of griddata(). Is there any way I could do something like this -- perhaps re-writing the griddata() routine?
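For reference, stock griddata() has no documented caching hook, but here are two hedged sketches of the usual workarounds (xs/ys/zs are the Cartesian coordinates of the spherical-grid points, xq/yq/zq the N^3 cubic-grid query points, and the value arrays are placeholder names):

% 1) scatteredInterpolant stores the triangulation in the object, so replacing
%    Values for later checkpoints skips the expensive retriangulation.
F = scatteredInterpolant(xs(:), ys(:), zs(:), values1(:), 'linear', 'none');
cube1 = reshape(F(xq(:), yq(:), zq(:)), size(xq));
F.Values = values2(:);                         % new checkpoint data, same triangulation
cube2 = reshape(F(xq(:), yq(:), zq(:)), size(xq));

% 2) To also cache the vertex indices and weights (what the question asks for),
%    one pass with delaunayTriangulation can precompute them; this sketch assumes
%    every query point lies inside the convex hull of the spherical grid.
DT = delaunayTriangulation(xs(:), ys(:), zs(:));
[ti, bc] = pointLocation(DT, [xq(:) yq(:) zq(:)]);   % enclosing tetrahedra + weights
vertIdx  = DT.ConnectivityList(ti, :);               % 4 vertex indices per query point
cube     = reshape(sum(values1(vertIdx) .* bc, 2), size(xq));   % cheap per checkpoint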

Putative correspondences

I am trying to implement the algorithm for estimating the fundamental matrix between two images using RANSAC. So far I have found the interest points using Harris corner detection. I am stuck at computing the putative correspondences using these interest points. I don't want to use the MATLAB toolbox for that; I'd like to learn how corresponding points are extracted from two images and how to implement it. I have read about block matching but have not completely understood the concept. Any samples and guidelines would help me to understand this problem better.
Thanks in advance.
There are many ways to search for corresponding interest points, but they're usually based on describing each of these interest points using the characteristics of the image around them, and, for each point in one image, comparing its surrounding's characteristics to the characteristics of the surroundings of other interest points in the other image.
Now assume you've decided to consider only a squared region (a block) around each point of interest that contains the intensity values of the image around the point. Now you can compare these blocks, and match those that are close to each other. The problem is now how to define "close" or, in other words, how to define the distance metric you'll use to compare these blocks.
There are many approaches, for example, you could use the sum-of-absolute-differences between two blocks, which means you could subtract two blocks, take the absolute value of the resulting block, and then sum all values in this resulting block, obtaining a scalar value which represents how close these blocks are. If this distance is less than a given threshold, you can consider the two blocks a match. This is basically what block matching does.
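To make the block-matching description concrete, here is a bare-bones sketch; pts1/pts2 are assumed to be N-by-2 [x y] corner locations (e.g. from Harris detection) in grayscale images I1 and I2, and the block size and SAD threshold are made-up values to tune:

half = 7;                                   % blocks are (2*half+1) x (2*half+1)
sadThreshold = 500;                         % accept a match only below this
matches = zeros(0, 2);                      % rows of [index into pts1, index into pts2]
for i = 1:size(pts1, 1)
    x1 = round(pts1(i,1));  y1 = round(pts1(i,2));
    if x1-half < 1 || y1-half < 1 || x1+half > size(I1,2) || y1+half > size(I1,1)
        continue;                           % block falls off the image border
    end
    b1 = double(I1(y1-half:y1+half, x1-half:x1+half));
    bestSad = inf;  bestJ = 0;
    for j = 1:size(pts2, 1)
        x2 = round(pts2(j,1));  y2 = round(pts2(j,2));
        if x2-half < 1 || y2-half < 1 || x2+half > size(I2,2) || y2+half > size(I2,1)
            continue;
        end
        b2 = double(I2(y2-half:y2+half, x2-half:x2+half));
        sad = sum(abs(b1(:) - b2(:)));      % sum of absolute differences
        if sad < bestSad, bestSad = sad; bestJ = j; end
    end
    if bestSad < sadThreshold
        matches(end+1, :) = [i bestJ];      %#ok<AGROW>
    end
end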
Similarly, you could define other types of regions to describe your points of interest, for example by changing their shapes, sizes, orientations, etc., and create more complex descriptors for these points of interest, which might capture more distinguishable characteristics (which is highly desired if you have the purpose of matching them later).
If you want to learn more about the topic, I think this presentation can get you started:
http://courses.cs.washington.edu/courses/cse455/09wi/Lects/lect6.pdf

Different results for Fundamental Matrix in Matlab

I am implementing stereo matching and as preprocessing I am trying to rectify images without camera calibration.
I am using the SURF detector to detect and match features in the images and try to align them. After I find all matches, I remove all that don't lie on the epipolar lines, using this function:
[fMatrix, epipolarInliers, status] = estimateFundamentalMatrix(...
matchedPoints1, matchedPoints2, 'Method', 'RANSAC', ...
'NumTrials', 10000, 'DistanceThreshold', 0.1, 'Confidence', 99.99);
inlierPoints1 = matchedPoints1(epipolarInliers, :);
inlierPoints2 = matchedPoints2(epipolarInliers, :);
figure; showMatchedFeatures(I1, I2, inlierPoints1, inlierPoints2);
legend('Inlier points in I1', 'Inlier points in I2');
The problem is that if I run this function on the same data, I still get different results, causing differences in the resulting disparity map in each run on the same data.
Putatively matched points are still the same, but the inlier points differ in each run.
Here you can see that some matches are different in result:
UPDATE: I thought that the differences were caused by the RANSAC method, but using LMedS and MSAC I am still getting different results on the same data.
EDIT: Admittedly, this is only a partial answer, since I am only explaining why this is even possible with these fitting methods and not how to improve the input keypoints to avoid this problem from the start. There are problems with the distribution of your keypoint matches, as noted in the other answers, and there are ways to address that at the stage of keypoint detection. But, the reason the same input can yield different results for repeated executions of estimateFundamentalMatrix with the same pairs of keypoints is because of the following. (Again, this does not provide sound advice for improving keypoints so as to solve this problem).
The reason for different results on repeated executions is related to the RANSAC method (and LMedS and MSAC). They all utilize stochastic (random) sampling and are thus non-deterministic. All methods except Norm8Point operate by randomly sampling 8 pairs of points at a time for (up to) NumTrials.
But first, note that the different results you get for the same inputs are not equally suitable (they will not have the same residuals) but the search space can easily lead to any such minimum because the optimization algorithms are not deterministic. As the other answers rightly suggest, improve your keypoints and this won't be a problem, but here is why the robust fitting methods can do this and some ways to modify their behavior.
Notice the documentation for the 'NumTrials' option (ADDED NOTE: changing this is not the solution, but this does explain the behavior):
'NumTrials' — Number of random trials for finding the outliers
500 (default) | integer
Number of random trials for finding the outliers, specified as the comma-separated pair consisting of 'NumTrials' and an integer value. This parameter applies when you set the Method parameter to LMedS, RANSAC, MSAC, or LTS.
MSAC (M-estimator SAmple Consensus) is a modified RANSAC (RANdom SAmple Consensus). Deterministic algorithms for LMedS have exponential complexity and thus stochastic sampling is practically required.
Before you decide to use Norm8Point (again, not the solution), keep in mind that this method assumes NO outliers, and is thus not robust to erroneous matches. Try using more trials to stabilize the other methods (EDIT: I mean, rather than switching to Norm8Point; but if you are able to back up in your algorithms, then address the inputs -- the keypoints -- as a first line of attack). Also, to reset the random number generator, you could do rng('default') before each call to estimateFundamentalMatrix. But again, note that while this will force the same answer each run, improving your key point distribution is the better solution in general.
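For completeness, those two mitigations would look roughly like this, reusing the variable names from the question's snippet (the NumTrials value is just an example):

rng('default');                            % makes the stochastic sampling repeatable
[fMatrix, epipolarInliers, status] = estimateFundamentalMatrix(...
    matchedPoints1, matchedPoints2, 'Method', 'RANSAC', ...
    'NumTrials', 50000, 'DistanceThreshold', 0.1, 'Confidence', 99.99);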
I know it's too late for your answer, but I guess it would be useful for someone in the future. Actually, the problem in your case is twofold:
Degenerate location of features, i.e., the feature locations are mostly clustered (on you :P) and not well spread throughout the image.
These matches are sort of on the same plane. I know you would argue that your body is not planar, but comparing it to the depth of the room, it sort of is.
Mathematically, this means you are kind of extracting E (or F) from a planar surface, which always has infinite solutions. To sort this out, I would suggest using some constraint on the distance between any two extracted SURF features, i.e., any two SURF features used for matching should be at least 40 or 100 pixels apart (depending on the resolution of your image).
Another way to get better SURF features is to set 'NumOctaves' in detectSURFFeatures(rgb2gray(I1),'NumOctaves',5); to larger values.
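A rough sketch of the spacing constraint suggested above; it assumes matchedPoints1/matchedPoints2 are point objects (e.g. SURFPoints) so that .Location and logical indexing are available, and the greedy selection and 40-pixel value are just one way to do it:

minDist = 40;                               % pixels; depends on image resolution
loc1 = matchedPoints1.Location;             % N-by-2 [x y] locations in image 1
keep = false(size(loc1, 1), 1);
for i = 1:size(loc1, 1)
    d = sqrt(sum((loc1(keep, :) - loc1(i, :)).^2, 2));   % distances to kept points
    if isempty(d) || all(d >= minDist)
        keep(i) = true;                     % far enough from everything kept so far
    end
end
spreadPoints1 = matchedPoints1(keep);
spreadPoints2 = matchedPoints2(keep);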
I am facing the same problem and this has helped (a little bit).

Process for comparing two datasets

I have two datasets at a time (in the form of vectors), and I plot them on the same axis to see how they relate to each other, specifically noting and looking for places where both graphs have a similar shape (i.e. places where both have a seemingly positive/negative gradient at approximately the same intervals). Example:
So far I have been working through the data graphically, but I realize that since the amount of data is so large, plotting it each time I want to check how two sets correlate will take far too much time.
Are there any ideas, scripts or functions that might be useful to automate this process somewhat?
The first thing you have to think about is the nature of the criteria you want to apply to establish the similarity. There is a wide variety of ways to measure similarity, and the more precisely you can describe what you want "similar" to mean in your problem, the easier it will be to implement, regardless of the programming language.
Having said that, here are some of the things you could look at:
correlation of the two datasets (see the sketch after this list)
difference of the derivative of the datasets (but I don't think it would be robust enough)
spectral analysis, as mentioned by thron of three
etc.
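As a minimal sketch of the first item: plain corrcoef in a sliding window, assuming A and B are equal-length column vectors; the window length and the 0.8 cutoff are arbitrary starting points.

win = 50;                                  % window length; tune to the data
n   = numel(A) - win + 1;
localCorr = zeros(n, 1);
for k = 1:n
    r = corrcoef(A(k:k+win-1), B(k:k+win-1));
    localCorr(k) = r(1, 2);                % correlation inside this window
end
similar = find(localCorr > 0.8);           % window starts where the two sets move together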
Knowing the origin of the datasets and their variability can also help a lot in formulating robust enough algorithms.
Sure. Call your two vectors A and B.
1) (Optional) Smooth your data either with a simple averaging filter (Matlab 'smooth'), or the 'filter' command. This will get rid of local changes in velocity ("gradient") that appear to be essentially noise (as in the ascending component of the red trace).
2) Differentiate both A and B. Now you are directly representing the velocity of each vector (Matlab 'diff').
3) Add the two differentiated vectors together (element-wise). Call this C.
4) Look for all points in C whose absolute value is above a certain threshold (you'll have to eyeball the data to get a good idea of what this should be). Points above this threshold indicate highly similar velocity.
5) Now look for where a high positive value in C is followed by a high negative value, or vice versa. In between these two points you will have similar curves in A and B.
Note: a) You could do the smoothing after step 3 rather than after step 1. b) Re 5), you could have a situation in which a 'hill' in your data is at the edge of the vector and so is 'cut in half', and the vectors descend to baseline before ascending in the next hill. Then 5) would misidentify the hill as coming between the initial descent and subsequent ascent. To avoid this, you could also require that the points in A and B in between the two points of velocity similarity have high absolute values.
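Putting the five steps together, a compact sketch; A and B are assumed to be equal-length column vectors, smooth needs the Curve Fitting Toolbox (a moving average via conv works in base MATLAB), and the span and threshold are values to eyeball from the data:

As = smooth(A, 11);  Bs = smooth(B, 11);   % step 1: smooth out noisy velocity changes
dA = diff(As);       dB = diff(Bs);        % step 2: differentiate both vectors
C  = dA + dB;                              % step 3: combined velocity signal
thr = 2 * std(C);                          % step 4: threshold on |C|
sgn = zeros(size(C));
sgn(C >  thr) =  1;                        % strong shared positive velocity
sgn(C < -thr) = -1;                        % strong shared negative velocity
events = find(sgn ~= 0);
flips  = find(diff(sgn(events)) ~= 0);     % step 5: a high positive value followed by a
intervals = [events(flips), events(flips + 1)];   % high negative one (or vice versa)
                                                  % brackets a stretch where A and B are similar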