The Curve Fitting Toolbox has a tool called cftool that lets you fit curves to 1-D data. Is there anything for 2-D data?
Jerry suggested two very good choices. There are other options, though, if you want a more explicit formula for the model.
The Curve Fitting Toolbox, in current versions, lets you fit surfaces to data, not just curves.
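If you prefer to work programmatically rather than through the interactive tool, a minimal sketch with synthetic scattered data might look like this (the 'poly22' model type is just an example):

    % Fit a quadratic surface z = f(x,y) with the Curve Fitting Toolbox.
    x = rand(100,1);  y = rand(100,1);
    z = 1 + 2*x - 3*y + x.*y + 0.01*randn(100,1);   % synthetic example data
    sf = fit([x, y], z, 'poly22');                  % second-order polynomial in x and y
    plot(sf, [x, y], z)                             % fitted surface with the data overlaid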
Or fit a 2-D polynomial model using a tool like polyfitn (on the File Exchange).
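A hedged sketch of that route, assuming polyfitn (and its companion polyvaln) from the File Exchange are on your path:

    % Fit a full quadratic model in two variables to scattered (x,y,z) data.
    p    = polyfitn([x, y], z, 2);    % independent variables as an n-by-2 matrix
    zhat = polyvaln(p, [x, y]);       % evaluate the fitted model at the data points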
Or you can use nonlinear regression, if you have a model in mind. The Optimization Toolbox will help you there, with lsqnonlin or lsqcurvefit, either of which can fit 2-D (or higher-dimensional) models. Or, if you have the Statistics Toolbox, try nlinfit.
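For instance, a sketch with lsqcurvefit, where the exponential model form below is purely an example and [x, y] is an n-by-2 matrix of predictors:

    % Nonlinear 2-D fit with lsqcurvefit (Optimization Toolbox).
    model = @(b, xy) b(1) * exp(b(2)*xy(:,1) + b(3)*xy(:,2));
    b0    = [1; 0.1; 0.1];                       % initial parameter guess
    bfit  = lsqcurvefit(model, b0, [x, y], z);   % nlinfit([x,y], z, model, b0) is the stats-toolbox analogue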
Perhaps you might like a tool to fit Radial Basis Functions.
Neural nets are another way to fit data; for that, use the Neural Network Toolbox.
So there are many ways to model surfaces, depending on your interests, your knowledge of a likely form for the model, and what toolboxes you have or might choose to download. A very big factor in your model choice is your set of goals for the model. What will you do with it? How will it be used?
You seem to be looking for griddata. You might also want to look at gridfit.
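If interpolation is what you're after, a minimal sketch with griddata, where x, y, v are your scattered samples:

    % Interpolate scattered (x,y,v) samples onto a regular grid.
    [xq, yq] = meshgrid(linspace(min(x), max(x), 50), linspace(min(y), max(y), 50));
    vq = griddata(x, y, v, xq, yq);   % gridfit (File Exchange) builds a smooth approximating surface instead
    surf(xq, yq, vq)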
I wanted to ask if there is a framework for non-linear models in MATLAB, similar to the one for linear dynamical systems (ss, lsim, connect, etc.). I need to create some examples for a control theory lecture and would like to compare linear and non-linear systems.
For example, I've got a nonlinear model of a simple inverted pendulum, and functions for the Jacobians of the dynamics and the output map. Now I implement some controllers (state feedback, LQR, ...) at a stationary point and compare the linear and non-linear closed loops (using ODE solvers like ode15s). I know how to do it for one single model, but I'd like to easily switch between models and stationary points. I have already made some simple data structures and functions for automated plotting, controller synthesis and interconnections.
However, this took me quite some time and it's not easily adapted to other models with possibly different dimensions. So I wondered whether there is an already-implemented framework similar to what exists for linear systems, with functions like lsim for calculating responses or connect for creating interconnections, where you can give names to inputs and outputs, etc.
I'm happy about any hints on how to work with nonlinear models more efficiently :)
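There doesn't appear to be a direct nonlinear counterpart to ss/lsim/connect, but one lightweight pattern is to package each model as a struct of function handles so that models and stationary points can be swapped without touching the simulation code. A hedged sketch (all field names and the pendulum dynamics here are just illustrative):

    % Nonlinear model packaged as function handles.
    pend.f = @(x, u) [x(2); sin(x(1)) + u];      % dynamics xdot = f(x,u), upright equilibrium at x = 0
    pend.A = @(x, u) [0 1; cos(x(1)) 0];         % Jacobian df/dx
    pend.B = @(x, u) [0; 1];                     % Jacobian df/du
    x0 = [0; 0];  u0 = 0;                        % stationary point (upright position)

    sysLin = ss(pend.A(x0, u0), pend.B(x0, u0), eye(2), 0);   % local linearization
    K = lqr(sysLin.A, sysLin.B, eye(2), 1);                   % example LQR design

    % Compare closed loops from a small perturbation: nonlinear via ode15s, linear via initial.
    odefun   = @(t, x) pend.f(x, u0 - K*(x - x0));
    [t, xnl] = ode15s(odefun, [0 10], x0 + [0.1; 0]);
    xlin     = initial(feedback(sysLin, K), [0.1; 0], t);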
I have a curve which looks roughly/qualitatively like the curves displayed in those 3 images.
The only thing I know is that, by the hardware's design, the first part of the curve is supposed to be linear and the second part is some sort of logarithmic segment (it might be a combination of two logarithmic curves), i.e. a linlog camera. But I couldn't tell the mathematical structure of the equation, e.g. whether it looks like a*log(b)+c or a*(log(c+b))^2, etc. Is there a way to find a good fit/regression for this type of curve, and is there a specific way to do this in MATLAB? :-) I've got the student version, i.e. all the toolboxes, etc.
fminsearch is a very general way to find best-fit parameters once you have decided on a parametric equation, and the Optimization Toolbox has a range of more sophisticated methods.
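As a hedged sketch, once a candidate form is chosen (the a*log(b*x + c) + d below is just an example, and x, y are your measured data):

    % Fit a chosen parametric form by minimizing the sum of squared errors.
    model = @(p, x) p(1)*log(p(2)*x + p(3)) + p(4);
    sse   = @(p) sum((y - model(p, x)).^2);
    pfit  = fminsearch(sse, [1 1 1 0]);    % pick a start that keeps p(2)*x + p(3) positive
    plot(x, y, '.', x, model(pfit, x), '-')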
Comparing the merits of one parametric equation against another, however, is a deep topic. The main thing to be aware of is that you can always tweak the equation, adding another term or parameter, and get a better fit in terms of lower sum-squared error or whatever other goodness-of-fit metric you decide is appropriate. That doesn't mean it's a good thing to keep adding parameters: your solution might be becoming overly complex. In the end, the most reliable way to compare how well two different parametric models are doing is to cross-validate: optimize the parameters on a subset of the data, and evaluate only on data that the optimization procedure has not yet seen.
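A minimal sketch of that holdout idea, comparing two candidate polynomial degrees on data the fit never saw (the split fraction and degrees are arbitrary examples):

    % Fit on a training subset, score on the held-out subset.
    n        = numel(x);
    idx      = randperm(n);
    trainIdx = idx(1:round(0.7*n));
    testIdx  = idx(round(0.7*n)+1:end);
    p1 = polyfit(x(trainIdx), y(trainIdx), 3);   % simpler candidate model
    p2 = polyfit(x(trainIdx), y(trainIdx), 9);   % more flexible candidate model
    err1 = sum((y(testIdx) - polyval(p1, x(testIdx))).^2);
    err2 = sum((y(testIdx) - polyval(p2, x(testIdx))).^2);   % lower held-out error wins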
You can try the "function finder" on my curve fitting web site zunzun.com and see what it comes up with - it is free. If you have any trouble please email me directly and I'll do my best to help.
James Phillips
zunzun#zunzun.com
Is there any way I can convert the SURFPoints object generated by MATLAB into a matrix with x and y positions, for feeding into a neural network?
I am pretty much a complete beginner, and from what I can tell from the documentation, I can't see a way to get SURFPoints into a neural network.
Many thanks,
Hugh
SURFPoints has a property, Location, which is an n-by-2 matrix containing the (x,y) coordinates of each SURF point detected in the image.
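A minimal sketch (Computer Vision System Toolbox; the image name is a placeholder):

    % Detect SURF points and pull out the n-by-2 matrix of (x,y) locations.
    I      = rgb2gray(imread('myImage.png'));   % placeholder image, converted to grayscale
    points = detectSURFFeatures(I);
    xy     = points.Location;                   % n-by-2 matrix of (x,y) coordinates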
Note, however, that SURF points have other attributes besides their location (such as scale and orientation). If you only take the (x,y) locations, you are throwing away a lot of data.
Also, it's unclear how you would feed this information into a neural network. A neural network, like many other machine learning models, expects a fixed-length feature vector for each entity. If your task is something like image classification, you'll have to come up with some way to convert the list of SURF points into a feature vector that captures the properties you want your classifier to care about. Depending on your application, a neural network may or may not be the best way to go. In the context of computer vision and image processing, neural networks these days are more commonly used for unsupervised feature discovery (see "deep learning"). For supervised learning tasks, other models like boosted decision trees and SVMs give better theoretical guarantees and have fared much better in practice.
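One simple (and lossy) way to get such a fixed-length vector per image is to extract the SURF descriptors and pool them, e.g. by averaging; this is only a sketch, and encodings like bag-of-features usually work better:

    % Extract 64-element SURF descriptors and average them into one vector per image.
    [features, validPts] = extractFeatures(I, points);   % features is m-by-64 for SURF
    featureVector = mean(features, 1);                   % 1-by-64, independent of the number of points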
I am studying Support Vector Machines (SVM) by reading a lot of material. However, it seems that most of it focuses on how to classify the input 2D data by mapping it using several kernels such as linear, polynomial, RBF / Gaussian, etc.
My first question is, can SVM handle high-dimensional (n-D) input data?
According to what I found, the answer is YES!
If my understanding is correct, the process is:
1. The n-D input data is constructed in a Hilbert hyperspace, then
2. that data is simplified using some approach (such as PCA?) to combine it / project it back onto a 2-D plane, so that
3. the kernel methods can map it into an appropriate shape, such that a line or curve can separate it into distinct groups.
It means most of the guides/tutorials focus on step (3). But some toolboxes I've checked cannot plot the input data if it has more than 2 dimensions. How can the data be projected onto a 2-D plane afterwards?
If the data is not projected, how can it be classified?
My second question is: is my understanding correct?
My first question is, can SVM handle high-dimensional (n-D) input data?
Yes. I have dealt with data where n > 2500 when using LIBSVM software: http://www.csie.ntu.edu.tw/~cjlin/libsvm/. I used linear and RBF kernels.
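For reference, a hedged sketch using the LIBSVM MATLAB interface from that package (not the built-in Statistics Toolbox functions), where X is an n-by-d matrix of features (d can be in the thousands) and y is a label vector:

    % Train and test an RBF-kernel SVM with the LIBSVM MATLAB interface.
    model          = svmtrain(y, X, '-t 2 -c 1');        % '-t 2' selects the RBF kernel
    [pred, acc, ~] = svmpredict(yTest, XTest, model);    % predictions and accuracy on a held-out set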
My second question is, is my understanding correct?
I'm not entirely sure on what you mean here, so I'll try to comment on what you said most recently. I believe your intuition is generally correct. Data is "constructed" in some n-dimensional space, and a hyperplane of dimension n-1 is used to classify the data into two groups. However, by using kernel methods, it's possible to generate this information using linear methods and not consume all the memory of your computer.
I'm not sure if you've seen this already, but if you haven't, you may be interested in some of the information in this paper: http://pyml.sourceforge.net/doc/howto.pdf. I've copied and pasted a part of the text that may appeal to your thoughts:
A kernel method is an algorithm that depends on the data only through dot-products. When this is the case, the dot product can be replaced by a kernel function which computes a dot product in some possibly high dimensional feature space. This has two advantages: First, the ability to generate non-linear decision boundaries using methods designed for linear classifiers. Second, the use of kernel functions allows the user to apply a classifier to data that have no obvious fixed-dimensional vector space representation. The prime example of such data in bioinformatics are sequence, either DNA or protein, and protein structure.
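To make that concrete, a tiny sketch of evaluating an RBF kernel directly from the raw vectors (the gamma value is an arbitrary example); no explicit high-dimensional feature space is ever formed:

    % RBF kernel value between two feature vectors: K(x1,x2) = exp(-gamma*||x1 - x2||^2).
    gamma = 0.5;
    k     = @(x1, x2) exp(-gamma * sum((x1 - x2).^2));
    kval  = k([1 2 3], [1.5 2 2.5]);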
It would also help if you could explain what "guides" you are referring to. I don't think I've ever had to project data on a 2-D plane before, and it doesn't make sense to do so anyway for data with a ridiculous amount of dimensions (or "features" as it is called in LIBSVM). Using selected kernel methods should be enough to classify such data.
I have a problem where I am fitting a high-order polynomial to (not very) noisy data using linear least squares. Currently I'm using polynomial orders around 15-25, which work surprisingly well: the dependence is very nearly linear, but the accuracy of modelling the 'very nearly' is critical. I'm using MATLAB's polyfit() function, and (obviously) normalising the x-data. This generally works fine, but I have come across an issue with some recent datasets. The fitted polynomial has extrema within the x-data interval. For the application I'm working on this is a no-no. The polynomial model must have no stationary points over the x-interval.
So I need to add a constraint to the least-squares problem: the derivative of the fitted polynomial must be strictly positive over a known x-range (or strictly negative - this depends on the data but a simple linear fit will quickly tell me which it is.) I have had a quick look at the available optimisation toolbox functions, but I admit I'm at a loss to know how to go about this. Does anyone have any suggestions?
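One possible route (a hedged sketch, not something from the thread): keep the linear least-squares formulation but add linear inequality constraints on the derivative, enforced at a dense grid of points, and solve with lsqlin from the Optimization Toolbox. The derivative of a polynomial is linear in its coefficients, so this stays a linearly constrained linear least-squares problem:

    % Degree-15 fit with the slope constrained to be strictly positive on the data range.
    deg = 15;
    xn  = (x - mean(x)) / std(x);                 % normalise, as polyfit does with its third output
    V   = xn(:) .^ (deg:-1:0);                    % Vandermonde matrix, polyval coefficient order
    xg  = linspace(min(xn), max(xn), 200).';      % grid where the slope is constrained
    D   = (deg:-1:1) .* (xg .^ (deg-1:-1:0));     % derivative of each basis term at the grid points
    D   = [D, zeros(numel(xg), 1)];               % the constant term has zero derivative
    c   = lsqlin(V, y(:), -D, -1e-6*ones(numel(xg), 1));   % enforce p'(x) >= 1e-6 (flip signs for a decreasing fit)
    yfit = polyval(c, xn);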
[I appreciate there are probably better models than polynomials for this data, but in the short term it isn't feasible to change the form of the model]
[A closing note: I have finally got the go-ahead to replace this awful polynomial model! I am going to adopt a nonparametric approach, spline smoothing, using the excellent SPLINEFIT code by Jonas Lundgren. This has the advantage that I'm already using a spline model in the end-user application, so I already have C# code available to evaluate a spline model]
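For completeness, a hedged sketch of that route, assuming Jonas Lundgren's SPLINEFIT from the File Exchange is on the path (check its help text for the exact options):

    % Least-squares spline fit with a modest number of pieces (8 is an arbitrary choice).
    pp   = splinefit(x, y, 8);
    yfit = ppval(pp, x);        % pp is a standard piecewise-polynomial struct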
You could use cftool and its option to exclude data points.