Cross Correlation in Matlab

I was asked to implement cross-correlation in Matlab and compare it with the xcorr function that Matlab provides.
From what I have searched, it seems that cross-correlation is similar to convolution, but I still don't fully understand how either of them works, so it's been impossible to get it down in code.
If somebody has done this before and is willing to share the code with an explanation of how it works, it would be appreciated.
PS: I was told that I can't use built-in functions other than the simple ones (for, if, etc.).

I am sure you are familiar with the classic GIF of a convolution:
What do you see there? You compute the area under the product of two functions, which is an integral (in a discrete system, a sum of the values inside your integration limits). You do that over the whole range of one function (that's one inner loop) at every step over the range of the other function (nested in an outer loop).
So there you have it: a convolution can be programmed as the sum of multiplications of the values of two functions inside two nested loops over the integration limits. For the cross-correlation you change just one thing: convolution flips the second function before sliding it along the first, while cross-correlation does not.
Try programming that and come back if it doesn't work. Good luck with your assignment!
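If you get stuck, here is a minimal sketch of that nested-loop idea (assuming real-valued row vectors; for equal-length inputs the result matches xcorr(x, y)):

x = [1 2 3 4];                   % example signals
y = [2 0 1 3];
nx = length(x);
ny = length(y);
r = zeros(1, nx + ny - 1);       % lags run from -(ny-1) to nx-1
for k = 1:(nx + ny - 1)          % outer loop: one output value per lag
    lag = k - ny;
    s = 0;
    for n = 1:nx                 % inner loop: sum of products at this lag
        m = n - lag;             % index into y, shifted but not flipped
        if m >= 1 && m <= ny
            s = s + x(n) * y(m);
        end
    end
    r(k) = s;
end
% compare against the built-in: r should equal xcorr(x, y)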

Related

Solving Delay Differential Equations using ode45 in Matlab

I am trying to solve a DDE using ode45 in Matlab. My question is about the way I am solving this equation; I don't know whether my approach is valid or whether I should use dde23 instead.
I have the following equation:
xdot(t) = A*x(t) + B*U(t-td) + E(t),   where U(t-td) = K*x(t-td) and K is a constant
Normally, when there is no delay in the equation, I solve it using ode45. Now, with the delay present, I am again using ode45: I have the exact value of U(t-td) at each step, substitute it, and solve the equation.
Is my solution correct, or should I use dde23?
You have two problems here:
ode45 is a solver with adaptive step size. This means that your sampling steps are not necessarily equivalent to the actual integration steps. Instead, the integrator splits a sampling step into several integration steps as needed to achieve the desired accuracy (see this question on Scientific Computing for more information).
As a consequence, you may not be providing the correct delayed value of U at each step of the integration, even if you believe you are.
However, if your sampling steps are sufficiently small, you will indeed have one time step per sampling step. The reason for this is that you effectively disable the adaptive integration by making your time step smaller than needed (and thus waste computation time).
Higher-order Runge–Kutta methods such as ode45 not only use the value of the derivative at each integration step, but also evaluate it in between (and no, they cannot provide a usable solution value for those in-between times).
For example, suppose that your delay and your integration step are both td = 16. To make the integration step from t = 32 to t = 48, you need to evaluate U not only at t = 32−16 = 16 and t = 48−16 = 32, but also at t = 40−16 = 24. Now, you might say: okay, let's integrate such that we have an integration step at all those time points. But for these integration steps, you again need values in the middle, e.g., if you want to integrate from t = 16 to t = 24, you need to evaluate U at t = 0, t = 4, and t = 8. You get a never-ending cascade of smaller and smaller time steps.
Due to problem 2, it is impossible to provide the exact states from the past with anything but an integrator that never evaluates between its steps (such as the explicit Euler method), and using one is probably not a good idea in your case. For this reason, it is inevitable to use some sort of interpolation to obtain past values if you want to integrate DDEs with a higher-order integrator. dde23 does this in a sophisticated way, using a good interpolation.
If you only provide U at the integration steps, you are essentially performing a piecewise-constant interpolation, which is the worst possible interpolation and therefore requires very small integration steps. While you can do this if you really want to, dde23, with its more sophisticated piecewise cubic Hermite interpolation, can work with much larger time steps and integrate adaptively, and will therefore be much faster. Also, it's less likely that you somehow make a mistake. Finally, dde23 can deal with very small delays (smaller than the integration step), if you're into that sort of thing.
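For illustration, here is a minimal dde23 sketch for the stated equation xdot(t) = A*x(t) + B*K*x(t-td) + E(t); the matrices, delay, forcing term, and history below are placeholder assumptions, not values from the question:

A  = [0 1; -2 -3];                % example system matrix
B  = [0; 1];
K  = [-1 -1];                     % constant feedback gain
td = 0.5;                         % delay
E  = @(t) [0; sin(t)];            % example forcing term
% dde23 passes the delayed states in Z; Z(:,1) is x(t - td)
ddefun  = @(t, x, Z) A*x + B*(K*Z(:,1)) + E(t);
history = @(t) [1; 0];            % assumed state for t <= 0
sol = dde23(ddefun, td, history, [0 10]);
plot(sol.x, sol.y)                % sol.x: time points, sol.y: states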

Implementing an equation involving integrals as a filter

This is a question that possibly borders on the intersection of general MATLAB usage and signal processing; I thought I would first ask it in a MATLAB forum before trying signal processing.
So our lecturer read out his notes/paper and said that the equation (given as an image in the original post) could be implemented as a filter.
At first the idea seemed difficult to follow, but it made a bit of sense once I realized that integration is the same as finding the area under the curve, which seems similar to applying a low-pass filter so that only the portion of the signal under the threshold is allowed to pass through. But how, meaning with which function, can I implement the above equation? Do I need three filters, or can I use just one? And how do I use the terms preceding the integrals in the filter?
Thanks in advance

Alternatives to lsqlin in MATLAB

Ok, so I have a script that runs, among other things, the lsqlin optimization function millions of times. To speed this code up I "codegen" it (which basically automatically creates some MEX files). This is a follow-up to Linear systems of inequations.
The problem here is that lsqlin, like the other optimization functions, is not transformed and needs to be called externally, which leads to a loss of efficiency.
I already found the MINQ toolbox but could not understand how to translate from lsqlin to it. I also found the QPC toolbox, which requires a licence that I am currently waiting for.
Can anyone suggest another toolbox, and how to convert from lsqlin to it?
The general idea to codegen an lsqlin script (as can be seen, the function is called extrinsically rather than fully converted):
CODE:
function main_script()
    % lsqlin cannot be compiled by codegen, so it is declared extrinsic
    % and executed by MATLAB at run time
    coder.extrinsic('lsqlin_script')
    for i = 1:10^7
        X = lsqlin_script(A, b, X0);
        % ...
    end
end
function X = lsqlin_script(A, b, X0)
    % minimize ||X - X0||^2 subject to A*X <= b, starting from X0
    X = lsqlin(eye(2), X0, A, b, [], [], [], [], X0, ...
        optimoptions('lsqlin', 'Display', 'off'));
end
RUN:
codegen main_script.m
main_script_mex(INPUTS)
If you described your original problem, I think you could expect more answers.
A possible approach to avoid lsqlin (a sketch follows after these steps):
Calculate the orthogonal projection of Pxyz onto every plane defined by A and b.
Check whether the projections satisfy the inequality constraints. From those that do, select the point closest to Pxyz. If no valid point is found, then the closest point lies on an intersection of planes: calculate the shortest distance from Pxyz to every intersection line and follow the same steps used for the projections onto the planes.
As you can see, it is not fully elaborated; you should work out the details if you think it may solve your problem.
For these calculations you do not need an optimization function.
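A minimal sketch of the plane-projection step (assuming the feasible set is A*x <= b with the rows of A as plane normals; the function name and tolerance are illustrative):

function Xbest = closest_feasible_projection(A, b, Pxyz)
    % Project Pxyz onto each plane A(i,:)*x = b(i) and keep the
    % closest projection that satisfies all inequalities A*x <= b.
    best = inf;
    Xbest = [];
    for i = 1:size(A, 1)
        a = A(i, :)';                                   % normal of plane i
        proj = Pxyz - ((a'*Pxyz - b(i)) / (a'*a)) * a;  % orthogonal projection
        if all(A*proj <= b + 1e-9)                      % feasible (with tolerance)?
            d = norm(proj - Pxyz);
            if d < best
                best = d;
                Xbest = proj;
            end
        end
    end
    % If Xbest comes back empty, the nearest point lies on an intersection
    % of planes (the line case described above) and needs a second pass.
end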

Why does GlobalSearch return different solutions each run?

When running the GlobalSearch solver on a nonlinear constrained optimization problem, I often get very different solutions each run. For the cases where I have an analytical solution, the numerical results are less dispersed than in the non-analytical cases, but they still differ from run to run. It would be nice to get the same results at least for these analytical cases, so that I know the optimization routine is working properly. Is there a good explanation of this in the Global Optimization Toolbox User Guide that I missed?
Also, why does GlobalSearch use a different number of local solver runs each time?
Thanks!
A full description of how the GlobalSearch algorithm works can be found here.
In summary, the GlobalSearch method iteratively performs local optimization. It starts by using fmincon to search for a local minimum near the initial conditions you have provided. Then a set of "trial points", based on how good the initial result was, is generated using the scatter search algorithm. This is followed by more local optimization and a rating of how good the minima around these points are.
There are a couple of things that can cause the algorithm to give you different answers:
1. Changing the initial conditions you give it
2. The inherent randomness of the scatter search algorithm, which draws its trial points using MATLAB's random number generator
The fact that you are getting different answers each time likely means that your function is highly non-convex. The best thing I know of to do in this scenario is to try the optimization from several different initial conditions and see which result you get back most frequently.
It also looks like there is a 'PlotFcns' option that would let you get a better idea of what the functions the solver generates for you look like.
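Since the scatter search draws random trial points, one concrete thing you can do is fix MATLAB's random seed before each run, which makes GlobalSearch reproducible. A minimal sketch (the objective and bounds are illustrative):

rng(0, 'twister');                 % fixed seed: scatter search draws the same trial points
opts = optimoptions('fmincon', 'Algorithm', 'sqp');
problem = createOptimProblem('fmincon', ...
    'objective', @(x) peaks(x(1), x(2)), ...
    'x0', [0 0], 'lb', [-3 -3], 'ub', [3 3], 'options', opts);
gs = GlobalSearch;
[xmin, fmin] = run(gs, problem);   % identical output on every run with the same seed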
You can also use the ga or gamultiobj functions from the Global Optimization Toolbox; I would recommend this. Local solvers alone won't reliably find the global minimum of a non-convex problem. Even then, genetic algorithms don't guarantee the global solution. If you run ga and then use its final minimum as the starting point of your fmincon search, it should give the same answer consistently. There may be better approaches, but if the search space is unknown you may never know.
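A sketch of that ga-then-fmincon seeding idea (the objective and bounds are again illustrative; note that ga is itself stochastic, so the seed is fixed here too):

rng(0);                                            % fix the seed so ga is repeatable
obj = @(x) peaks(x(1), x(2));
lb = [-3 -3];
ub = [3 3];
xga  = ga(obj, 2, [], [], [], [], lb, ub);         % coarse global search over the box
xmin = fmincon(obj, xga, [], [], [], [], lb, ub);  % local polish from ga's best point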

Functional form of 2D interpolation in Matlab

I need to construct an interpolating function from a 2D array of data. The reason I need something that returns an actual function is that I need to be able to evaluate the function as part of an expression that I have to integrate numerically.
For that reason, "interp2" doesn't cut it: it does not return a function.
I could use "TriScatteredInterp", but that's heavy-weight: my grid is equally spaced (and big), so I don't need the Delaunay triangulation.
Are there any alternatives?
(Apologies for the 'late' answer, but I have some suggestions that might help others if the existing answer doesn't help them)
It's not clear from your question how accurate the resulting function needs to be (or how big 'big' is), but one approach you could adopt is to regress the data points you have using a least-squares or Kalman-filter-based method. You'd need to do this with a number of candidate function forms and then choose the 'best' one, for example by using a measure such as the MAE or MSE.
Of course, this requires some idea of what the form of the underlying function could be, but your question isn't clear as to whether you have this kind of information.
Another approach that could work (and requires no knowledge of what the underlying function might be) is the use of the fuzzy transform (F-transform) to generate line segments that provide local approximations to the surface.
The method for this would be:
Define a 2D universe that includes the x and y domains of your input data
Create a 2D fuzzy partition of this universe, choosing partition sizes that give the accuracy you require
Apply the discrete F-transform using your input data to generate fuzzy data points in a 3D fuzzy space
Pass the inverse F-transform as a function handle (along with the fuzzy data points) to your integration function
If you're not familiar with the F-transform then I posted a blog a while ago about how the F-transform can be used as a universal approximator in a 1D case: http://iainism-blogism.blogspot.co.uk/2012/01/fuzzy-wuzzy-was.html
To see the mathematics behind the method and its extension to the multidimensional case, the University of Ostrava has published a PhD thesis that explains its application to various engineering problems and also provides an example of how it is constructed for the case of a 2D universe: http://irafm.osu.cz/f/PhD_theses/Stepnicka.pdf
If you want a function handle, why not define f = @(xi,yi) interp2(X,Y,Z,xi,yi)?
It might be a little slow, but I think it should work.
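A quick sketch of that idea, passing the handle straight to a numerical integrator (the grid and data here are illustrative):

[X, Y] = meshgrid(linspace(0, 1, 50));     % regular, equally spaced grid
Z = sin(2*pi*X) .* cos(2*pi*Y);            % sample data on the grid
f = @(xi, yi) interp2(X, Y, Z, xi, yi);    % handle that interpolates anywhere on the grid
% interp2 accepts array inputs, so f can go straight into integral2
I = integral2(f, 0, 1, 0, 1);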
If I understand you correctly, you want to perform a surface/line integral of 2-D data. There are ways to do it, but maybe not the way you want. I had the exact same problem and it's annoying! The only way I solved it was to use the Surface Fitting Tool (sftool) to create a surface and then integrate that.
After you create your fit using the tool (it has a GUI as well), it will generate an sfit object, which you can then integrate in 2-D using quad2d.
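A sketch of that fit-then-integrate workflow, done programmatically with fit instead of the GUI (the data and the fit type are illustrative; requires the Curve Fitting Toolbox):

[xg, yg] = meshgrid(linspace(0, 1, 30));
zg = xg.^2 + yg;                              % sample surface data
sf = fit([xg(:), yg(:)], zg(:), 'poly23');    % returns an sfit object
I = quad2d(@(x, y) sf(x, y), 0, 1, 0, 1);     % numerical surface integral of the fit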
I also tried your method of using interp2 and got similar results to the sfit object's, but I had no idea how to do a numerical (line/surface) integration with the raw data. Creating the sfit object and then integrating it was much faster.
It was the first time I had done something like this, so I confirmed it using a numerically evaluated line integral. According to Stokes' theorem, the surface integral and the line integral should be the same, and they did indeed turn out to be the same.
I asked this question on the Mathematics Stack Exchange: I wanted to do a line integral of 2-D data, ended up doing a surface integral, and then confirmed the answer using a line integral!