Multi-objective Optimization of Norm Functions - matlab

I am trying to perform a multi-objective optimization in MATLAB on two functions involving norms (shown in the image). The first function, f1(v), is to be minimized and involves the 0-norm. The second function, f2(v), is to be maximized and involves the 2-norm. Both u and v are vectors, but u is fixed; the product v_i^T u produces a scalar. I am interested in generating the Pareto front for these two functions given a list of 10,000 different random vectors v_i. Suggestions appreciated.
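Since the exact expressions for f1 and f2 only appear in the image, the following is a minimal sketch of the brute-force approach: evaluate both objectives for each of the 10,000 candidate vectors and keep only the non-dominated points. The definitions of f1 and f2 below (nnz for the 0-norm term, a 2-norm ratio involving v'*u) and the vector length n are placeholder assumptions to be replaced with the real formulas.

```matlab
% Minimal sketch: brute-force Pareto filtering over a fixed sample of vectors.
% f1 and f2 are placeholders only -- substitute the actual norm expressions
% from the image; the vector length n is also made up.
n  = 50;                                  % length of each vector (assumption)
N  = 10000;                               % number of candidate vectors
u  = randn(n, 1);                         % the fixed vector u
V  = randn(n, N) .* (rand(n, N) > 0.5);   % candidate vectors v_i, one per column

f1 = @(v) nnz(v);                         % placeholder 0-norm term (minimize)
f2 = @(v) abs(v.'*u) / norm(v, 2);        % placeholder 2-norm term (maximize)

F1 = zeros(1, N);
F2 = zeros(1, N);
for i = 1:N
    F1(i) = f1(V(:, i));
    F2(i) = f2(V(:, i));
end

% Keep the non-dominated points: v_i is dominated if some v_j is no worse in
% both objectives (F1 smaller-or-equal, F2 larger-or-equal) and strictly
% better in at least one.
onFront = true(1, N);
for i = 1:N
    better = (F1 <= F1(i)) & (F2 >= F2(i)) & ((F1 < F1(i)) | (F2 > F2(i)));
    onFront(i) = ~any(better);
end

plot(F1(onFront), F2(onFront), 'ro');
xlabel('f_1 (minimize)');  ylabel('f_2 (maximize)');
```

If you want a front that is not restricted to a fixed sample of vectors, gamultiobj from the Global Optimization Toolbox searches for Pareto-optimal points directly, though the discontinuous 0-norm objective may need a surrogate such as the 1-norm.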

Related

Nonlinear curve fitting of a matrix function in python

I have the following problem. I have an N x N real matrix Z(x; t), where x and t may be vectors in general. I have N_s observations (x_k, Z_k), k = 1, ..., N_s, and I'd like to find the vector of parameters t that best approximates the data in the least-squares sense, i.e. the t that minimizes
S(t) = \sum_{k=1}^{N_s} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( Z_{k,ij} - Z_{ij}(x_k; t) \right)^2
This is, in general, a non-linear fit of a matrix function. I'm only finding examples that fit scalar functions, which are not immediately generalizable to a matrix function (nor a vector function). I tried scipy.optimize.leastsq, the symfit package, and lmfit, but I still can't find a solution. I'm ending up writing my own code... any help is appreciated!
You can do curve fitting with multi-dimensional data. As far as I am aware, none of the low-level algorithms explicitly support multidimensional data; what they do is minimize a one-dimensional array in the least-squares sense. The fitting methods do not really care about the "independent variable(s)" x except insofar as they help you calculate the array to be minimized, for example by evaluating a model function to compare against y data.
That is to say: if you can write a function that takes the parameter values and calculates the matrix of residuals to be minimized, just flatten that 2-d (or n-d) array to one dimension. The fit will not mind.
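As a concrete illustration of the flattening idea, here is a sketch written in MATLAB with lsqnonlin (the language used elsewhere on this page); in scipy the same thing amounts to returning residual.ravel() from the function you pass to leastsq or least_squares. Zmodel, the synthetic data, and the parameter values are made-up placeholders.

```matlab
% Sketch: fit a matrix-valued model by flattening the residuals. Zmodel,
% the synthetic data and the parameter values are made-up placeholders.
function matrix_fit_demo
    N = 4;  Ns = 25;
    t_true = [1.5; -0.3];
    xs = linspace(0, 1, Ns);
    Zdata = zeros(N, N, Ns);
    for k = 1:Ns
        Zdata(:,:,k) = Zmodel(xs(k), t_true) + 0.01*randn(N);  % "observations"
    end
    t0   = [1; 0];                                             % initial guess
    tfit = lsqnonlin(@(t) resvec(t, xs, Zdata), t0);           % Optimization Toolbox
    disp(tfit)
end

function Z = Zmodel(x, t)
    % Toy matrix-valued model Z(x; t); replace with the real one.
    Z = t(1)*exp(-t(2)*x)*eye(4) + t(2)*x*ones(4);
end

function r = resvec(t, xs, Zdata)
    % Build every N-by-N residual matrix, then flatten everything into one
    % long column vector: the solver only ever sees a 1-D array.
    [N, ~, Ns] = size(Zdata);
    r = zeros(N*N*Ns, 1);
    for k = 1:Ns
        r((k-1)*N*N+1 : k*N*N) = reshape(Zmodel(xs(k), t) - Zdata(:,:,k), [], 1);
    end
end
```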

Partial Differentiation

I wanted to know the command for differentiating a complex function. I am having trouble formulating the function I want to partially differentiate with respect to Q so that I can then obtain the optimal Q. fi, Xi, li, Yi, Qfi and Qsi are the decision variables; all of this is for the i-th supplier. One issue I am having is multiplying two variables together, e.g. the product fiSfiXi. How do I do that?
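If the Symbolic Math Toolbox is available, the relevant commands are syms, diff, and solve. A minimal sketch follows; the objective below is invented purely to show the mechanics, and the real expression should be built from your fi, Xi, li, Yi, Qfi, Qsi terms.

```matlab
% Sketch with the Symbolic Math Toolbox; the objective is an invented example.
syms Q f S X positive            % declare symbolic variables
cost = f*S*X + 100/Q + 2*Q;      % products of variables are just written with *
dcost_dQ = diff(cost, Q);        % partial derivative with respect to Q
Qopt = solve(dcost_dQ == 0, Q)   % optimal Q from the first-order condition
```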

Simulink 3D lookup table

I have a system of three nonlinear equations with eight unknowns. I'm currently setting each equation equal to a desired value and then using Matlab's fsolve (a numerical solver) to find a solution. Instead of running fsolve in real-time, I'd like to pre-compute solutions for a specific set of values to which I set the equations equal.
Pursuant to that goal, I've run the solver over a set of values and created a 3-D matrix (N x N x N), which I've attempted to load into eight Simulink 3-D lookup tables (the Direct Lookup Table (n-D) block) so I can fetch each of the eight solved unknowns. It's my understanding that the inputs to this block should work the same way I would reference an element in my 3-D array, table(x,y,z), but I'm constantly getting Simulink table input out-of-range errors. I've confirmed the inputs are within the table size, so I'm not sure what's wrong.
This isn't the most elegant implementation, so I'm open to better solutions. Ideally, I'd like to have a Simulink lookup that takes three inputs and returns a vector of the eight solved unknowns, or even better, can do some type of linear interpolation between the three lookup values to return an approximate solution.
Thanks!
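One thing worth checking first: if I recall correctly, the Direct Lookup Table (n-D) block expects zero-based index inputs, so an element you would reference as table(x,y,z) in MATLAB corresponds to inputs x-1, y-1, z-1 at the block, which can produce out-of-range errors even when the MATLAB indices look valid. For the interpolating behavior you describe, the n-D Lookup Table block interpolates between breakpoints, or you can do the same in a MATLAB Function block with interpn. A rough sketch of the interpn route, with made-up breakpoints and tables standing in for the offline fsolve results:

```matlab
% Sketch: interpolate precomputed fsolve solutions with interpn.
% Breakpoints and tables below are placeholders; in the real problem the
% eight N-by-N-by-N tables come from the offline fsolve sweep.
N   = 21;
bp1 = linspace(0, 1, N);  bp2 = linspace(0, 2, N);  bp3 = linspace(-1, 1, N);
[G1, G2, G3] = ndgrid(bp1, bp2, bp3);
tables = cell(1, 8);
for m = 1:8
    tables{m} = sin(m*G1) + cos(m*G2).*G3;    % stand-in for solved unknown #m
end

% At run time: three inputs in, a vector of the eight unknowns out,
% with linear interpolation between the precomputed grid points.
q   = [0.37, 1.25, -0.4];                     % example query point
sol = zeros(8, 1);
for m = 1:8
    sol(m) = interpn(bp1, bp2, bp3, tables{m}, q(1), q(2), q(3), 'linear');
end
disp(sol)
```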

How to solve a first-order system of PDEs in MATLAB

I have a set of 4 PDEs:
du/dt + A(u) * du/dx = Q(u)
where u is a column vector, u = [u1; u2; u3; u4], A is a 4x4 matrix, and Q is 4x1. Both A and Q are functions of u.
But my questions are:
1. How can I solve the above equation in MATLAB?
2. If I solve it with MATLAB's built-in PDE functions, can I convert the result into a simple function that does not rely on those ready-made functions?
3. Is there any way to calculate A and Q explicitly, i.e. at every time step compute A and Q from the previous time step's data and substitute the new values into the equation, so that the program runs faster?
PDEs require finite differences, finite elements, boundary elements, etc. You can also turn them into ODEs using transforms like Laplace, Fourier, etc. Solve those using ODE functions and then transform back. Neither one is trivial.
Your equation is a non-linear transient transport (advection) equation; with only first-order derivatives in time and space it is a hyperbolic system.
The equation you posted has the additional difficulty of being non-linear, because both the A matrix and the Q vector are functions of the unknown u. You'll have to start by linearizing your equations: solve for increments in u rather than for u itself.
Once you've done that, discretize the du/dx term using finite differences, finite elements, or boundary elements. You should start with a weighted residual integral formulation.
You're almost done: next, integrate w.r.t. time using the method of your choice.
It's not trivial.
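As a rough illustration of the last two steps, and of recomputing A and Q from the previous time level as asked in question 3, here is a minimal explicit Lax-Friedrichs sketch. Afun, Qfun, the initial condition, and the boundary treatment are placeholders, and the time step must respect a CFL condition.

```matlab
% Sketch: explicit Lax-Friedrichs stepping for du/dt + A(u)*du/dx = Q(u),
% rebuilding A and Q at every step from the previous step's u.
% Afun, Qfun, the initial condition and the boundary treatment are placeholders.
Afun = @(u) diag([1 2 3 4]) + 0.1*(u*u.');    % stand-in for the real 4x4 A(u)
Qfun = @(u) -0.05*u;                          % stand-in for the real 4x1 Q(u)

Nx = 200;  L = 1;  dx = L/(Nx-1);  x = linspace(0, L, Nx);
dt = 0.2*dx;                                  % keep max|eig(A)|*dt/dx <= 1 (CFL)
U  = [exp(-100*(x - 0.3).^2); zeros(3, Nx)];  % 4 x Nx initial condition (placeholder)

for n = 1:500
    Unew = U;
    for j = 2:Nx-1
        uj = U(:, j);
        Unew(:, j) = 0.5*(U(:, j+1) + U(:, j-1)) ...
                   - (dt/(2*dx)) * (Afun(uj) * (U(:, j+1) - U(:, j-1))) ...
                   + dt * Qfun(uj);
    end
    Unew(:, 1) = Unew(:, 2);   Unew(:, end) = Unew(:, end-1);   % crude outflow BCs
    U = Unew;
end
plot(x, U(1, :));  xlabel('x');  ylabel('u_1');
```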
Google found this: maybe it will help you.
http://www.mathworks.com/matlabcentral/fileexchange/3710-nonlinear-diffusion-toolbox

Minimizing error of a formula in MATLAB (Least squares?)

I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors: one known and one unknown. This formula is supposed to represent the error, and I need to minimize that error. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering which function is best for doing that and what arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem that you must be trying to solve is
min_{beta} u'u = min_{beta} \sum_i u_i^2, with u = y - X*beta,
where u is the error, y is the vector of dependent variables you are trying to explain, X is a matrix of independent variables, and beta is the vector of coefficients you want to estimate.
Since \sum_i u_i^2 is differentiable (and convex), you can find the minimum of this expression by computing its derivative and setting it equal to zero.
If you do that, you find that beta = inv(X'X) X'y. This may be calculated using the MATLAB function regress (http://www.mathworks.com/help/stats/regress.html) or by writing the formula directly in MATLAB. However, you should be careful about how you evaluate the inverse of (X'X); see "Most efficient matrix inversion in MATLAB".
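A small illustration with made-up data (regress needs the Statistics and Machine Learning Toolbox; the backslash operator avoids forming the inverse explicitly):

```matlab
% Sketch: ordinary least squares three ways, on made-up data.
n = 100;
X = [ones(n, 1), randn(n, 2)];          % design matrix, first column = intercept
beta_true = [2; -1; 0.5];
y = X*beta_true + 0.1*randn(n, 1);      % noisy observations

beta_formula   = inv(X'*X)*X'*y;        % textbook normal-equations formula
beta_backslash = X\y;                   % preferred: QR-based, numerically safer
beta_regress   = regress(y, X);         % Statistics and Machine Learning Toolbox
```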