Octave equivalent of MATLAB ltitr.m function

I am trying to get some MATLAB scripts to run in Octave, but have a problem with the following line of code:
x = ltitr( a, b, u, x0 ) ;
which throws an error in Octave.
Online research shows that the ltitr function is an internal MATLAB function that returns the linear time-invariant time response kernel for the given inputs. This sounds like a common DSP requirement, so I feel it must be implemented directly in Octave, or in the latest Control package from SourceForge. However, I can't seem to find an Octave equivalent. I've read the documentation for the latest Octave Control package, and maybe I should be using lsim.m, ss.m, dss.m or impulse.m, but I'm not really sure.
Can anyone enlighten me? If it's not implemented in Octave, maybe some online reference to code that I could use to write my own ltitr function?

If you type help ltitr at the MATLAB command prompt, you get this documentation:
%LTITR Linear time-invariant time response kernel.
%
% X = LTITR(A,B,U) calculates the time response of the
% system:
% x[n+1] = Ax[n] + Bu[n]
%
% to input sequence U. The matrix U must have as many columns as
% there are inputs u. Each row of U corresponds to a new time
% point. LTITR returns a matrix X with as many columns as the
% number of states x, and with as many rows as in U.
%
% LTITR(A,B,U,X0) can be used if initial conditions exist.
% Here is what it implements, in high speed:
%
%      for i=1:n
%        x(:,i) = x0;
%        x0 = a * x0 + b * u(i,:).';
%      end
%      x = x.';
% Copyright 1984-2007 The MathWorks, Inc.
% $Revision: 1.1.6.4 $ $Date: 2007/05/23 18:54:41 $
% built-in function
As such, they pretty much already give you the code for it. However, I'm assuming this is implemented as a compiled built-in (or MEX) function, which is why it's so fast. If you want to port this over to Octave, you just have to use the code they reference above. It won't be as fast as MATLAB's version, but that for loop is essentially the basic way to implement it.
However, for the sake of completeness, let's write our own Octave function for it:
function x = ltitr(A, B, U, x0)
  %// Number of rows in U is the number of time points
  num_points = size(U, 1);
  %// Number of states is the length of the initial state vector
  num_states = numel(x0);
  x = zeros(num_states, num_points); %// Pre-allocate output (states x time points)
  %// For each time point ...
  for idx = 1 : num_points
    x(:,idx) = x0;              %// Store the state at the current time point
    x0 = A*x0 + B*U(idx,:).';   %// Compute the state at the next time point
  end
  x = x.'; %// Transpose so each row corresponds to a time point
The above function is essentially the same as the code you see in MATLAB, with a couple of additions, such as pre-allocating the output matrix for efficiency and computing some helper variables up front. Also, note that the output matrix x is built with its dimensions flipped (states along the rows, time along the columns). The reason is that the state update at each time point is easier to compute in this orientation; otherwise you would have to do unnecessary transposing of the variables in the x0 = ... statement. When you're done, you transpose the resulting matrix to give the final output matrix x.
I'm assuming the default initial state x0 is all zeroes, so if you want to use this as the default, specify x0 = zeros(n,1); where n is the number of states of your LTI system.
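For a quick sanity check, here is a minimal usage sketch; the system matrices and input sequence below are made up purely for illustration:
% a small 2-state, 1-input example (arbitrary values, for illustration only)
A = [0.9 0.1; 0 0.8];   % 2 x 2 state-transition matrix
B = [0; 1];             % 2 x 1 input matrix
U = ones(10, 1);        % 10 time points, 1 input (a step input)
x0 = zeros(2, 1);       % default initial state: all zeroes
X = ltitr(A, B, U, x0); % X is 10 x 2: one row per time point, one column per state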


spline interpolation and its (exact) derivatives

Suppose I have the following data and commands:
clc;clear;
t = [0:0.1:1];
t_new = [0:0.01:1];
y = [1,2,1,3,2,2,4,5,6,1,0];
p = interp1(t,y,t_new,'spline');
plot(t,y,'o',t_new,p)
You can see this works quite well, in the sense that the interpolating function matches the data points at the nodes. But my problem is that I need to compute the exact derivative of y (i.e., the p function) w.r.t. time and plot it against the t vector. How can this be done? I can't simply use the diff command, because I need the derivative to have the same length as the t vector. Thanks a lot.
Method A: Using the derivative
This method calculates the actual derivative of the polynomial. If you have the curve fitting toolbox you can use:
% calculate the piecewise polynomial
pp = interp1(t,y,'spline','pp')
% take the first order derivative of it
pp_der=fnder(pp,1);
% evaluate the derivative at points t (or any other points you wish)
slopes=ppval(pp_der,t);
If you don't have the Curve Fitting Toolbox, you can replace the fnder line with:
% piece-wise polynomial
[breaks,coefs,l,k,d] = unmkpp(pp);
% get its derivative
pp_der = mkpp(breaks,repmat(k-1:-1:1,d*l,1).*coefs(:,1:k-1),d);
Source: This mathworks question. Thanks to m7913d for linking it.
Appendix:
Note that
p = interp1(t,y,t_new,'spline');
is a shortcut for
% get the polynomial
pp = interp1(t,y,'spline','pp');
% get the height of the polynomial at query points t_new
p=ppval(pp,t_new);
To get the derivative we obviously need the polynomial and can't just work with the new interpolated points. To avoid interpolating the points twice which can take quite long for a lot of data, you should replace the shortcut with the longer version. So a fully working example that includes your code example would be:
t = [0:0.1:1];
t_new = [0:0.01:1];
y = [1,2,1,3,2,2,4,5,6,1,0];
% fit a polynomial
pp = interp1(t,y,'spline','pp');
% get the height of the polynomial at query points t_new
p=ppval(pp,t_new);
% plot the new interpolated curve
plot(t,y,'o',t_new,p)
% piece-wise polynomial
[breaks,coefs,l,k,d] = unmkpp(pp);
% get its derivative
pp_der = mkpp(breaks,repmat(k-1:-1:1,d*l,1).*coefs(:,1:k-1),d);
% evaluate the derivative at points t (or any other points you wish)
slopes=ppval(pp_der,t);
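To also plot the derivative against the t vector, as asked, you could append something along these lines:
% plot the derivative of the spline at the original sample points
figure
plot(t, slopes, 'r*-')
xlabel('t'); ylabel('dp/dt')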
Method B: Using finite differences
The derivative of a continuous function is, at its core, just the difference between f(x) and f(x + h) for an infinitesimally small h, divided by that h.
In Matlab, eps is the distance from 1.0 to the next larger double-precision number, i.e. the smallest relative step you can take. Therefore, after each t_new we add a second point which is eps larger and interpolate y at the new points. The difference between each point and its +eps pair, divided by eps, then gives the derivative.
The problem is that if we work with such small differences, the precision of the output derivatives is severely limited, meaning they can only take integer values. Therefore we use a step slightly larger than eps to allow for higher precision.
% how many floating points the derivatives can have
precision = 10;
% add after each t_new a second point with +eps difference
t_eps=[t_new; t_new+eps*precision];
t_eps=t_eps(:).';
% interpolate with those points and get the differences between them
differences = diff(interp1(t,y,t_eps,'spline'));
% delete all differences which are not between t_new and t_new + eps
differences(2:2:end)=[];
% get the derivatives of each point
slopes = differences./(eps*precision);
You can of course replace t_new with t (or any other time you want to get the differential of) if you want to get the derivatives at the old points.
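For instance, to evaluate the finite differences at the original sample points, the same code simply uses t in place of t_new (slopes_at_t is just an illustrative variable name):
t_eps = [t; t+eps*precision];
t_eps = t_eps(:).';
differences = diff(interp1(t,y,t_eps,'spline'));
differences(2:2:end) = [];
slopes_at_t = differences./(eps*precision);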
This method is slightly inferior to Method A in your case, as it is slower and a bit less precise. But maybe it's useful to somebody else in a different situation.

How can I make an overloaded function that accepts both matrix and individual parameters?

As homework for my Robotics course, I'm supposed to write functions that convert between coordinate systems. Specifically, we are supposed to do this:
Function parameters are Xo = mySYSTEM1toSYSTEM2(Xi), where:
Xi is matrix 3 * N.
Every column represents one point; the elements of the column are the coordinates of that point in SYSTEM1.
I know the equations for conversion but I need help with matlab syntax. I was imagining something like this:
%Takes matrix as arguments
% - loops through it and calls mySphericToCarthesian(alpha, beta, ro) for every column
function [Xo] mySphericToCarthesian[Xi]
%Converts one spheric point to another
function [x, y, z] mySphericToCarthesian[alpha, beta, ro]
Matlab doesn't seem to like this.
How can I establish these two function so that I can actually start with the homework itself?
Well, probably the simplest option would be to define two functions with different names. In Matlab, functions with the same name shadow each other (i.e. only one function with a given name is visible at a time), so overloading by name is not very useful in practice.
However, if you absolutely want only one function with two different behaviors, then it can be achieved by using the nargin and nargout variables. Inside a function, they indicate the number of inputs and outputs specified by the calling script/function. You can also use varargin, which places all the inputs in a cell.
In practice, this gives:
function [x, y, z] = mySphericToCarthesian(varargin)
% Put your function description here.
switch nargin
    case 1
        Xi = varargin{1};
        % Do your computation with the matrix Xi
        % Define only x, which will be used as the only output
    case 3
        alpha = varargin{1};
        beta = varargin{2};
        rho = varargin{3};
        % Do your computation with alpha, beta and rho
        % Define x, y and z for the outputs
end
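Both calling conventions then go through the same function name. A hypothetical call site (the numbers are made up, and the actual conversion inside the function still has to be written) would look like:
% matrix form: a 3 x N matrix of spherical coordinates, one point per column
Xi = rand(3, 5);
Xo = mySphericToCarthesian(Xi);
% scalar form: one point given as three separate values
[x, y, z] = mySphericToCarthesian(0.5, 1.2, 2.0);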

Matlab optimization: what types of objective functions are 'allowed' with fminsearch.m and Co.?

Examples of optimizations with functions like fmincon.m and fminsearchbnd.m usually minimize objective functions that are relatively simple. By simple I mean that the objective function consists only of some algebraic expression, e.g. the Rosenbrock formula.
In my problem, on the other hand, the objective function consists of several steps:
1. computing an L2-norm misfit between an observed data point and a set of n training data points (n ~ 5e4),
2. selecting those data points from the training data set that give the lowest misfit,
3. then using the row indices of this selected subset to compute the final distance that I intend to minimize.
i.e. I perform operations that cannot be formulated as a single mathematical expression. Can I use such an objective function with tools like fminsearchbnd.m or fmincon.m at all? My results so far are not very promising...
There is an easy and obvious solution for that: use fminsearch() to find a minimum of a self-defined function. In my example it is fitting a polynomial, which of course is easy, but the trick is that this could be anything. Your objective function can access the data if you make it a nested function, so that they share the same variable scope.
You can start from the following code and fill in everything you want to do part by part and maybe ask followup questions, if any come up.
function main
verbose = 1; % some output
% optimize something, maybe a distorted polynomial
x = sort(rand(20,1));
p_original = [1.5, 3, 2, 1];
y = polyval(p_original,x) + 0.5*(rand(size(x))-0.5);
% optimize polynomial of order order. This is an example of how to pass
% a parameter to the fit function.
order = 3;
% obvious solution is this, but we want to do something else
p_polyfit = polyfit(x,y,order)
% we want to do it a bit more complex
pfit = optimize_something(x, y, order, verbose)
% what is happening?
figure
plot(x,polyval(p_original,x),'k-')
hold on
plot(x,y,'ko')
plot(x,polyval(p_polyfit,x),'rs-')
plot(x,fit_function(x,pfit),'gx-')
legend('original','noisy','polyfit','optimization')
end
function pfit = optimize_something(x,y, order, verbose)
% for polynomial of order order we need order+1 coefficients
p0 = ones(1,order+1); % initial guess: all coefficients are 1
if verbose
fprintf('optimize_something calling fminsearch(@objFun)\n');
end
% hand over only p0 to our objective function
pfit = fminsearch(@objFun, p0);
% ------------------------- NESTED objFUN --------------------------------%
function e = objFun(p)
% This function accepts only p as parameter and returns a value e, which
% will be minimized by some metric (maybe least squares).
% Since this function is nested, it can use also the predefined variables x, y (and also p0 and verbose).
% The magic is, we calculate a value yfitted out of x and p by a
% fit_function. This function can really be anything!
yfitted = fit_function(x, p);
e = sum((yfitted-y).^2);
% e = sum(abs(yfitted-y)); % another possibility
end
% ------------------------- NESTED objFUN --------------------------------%
if verbose
disp('pfit found')
end
end
function yfitted = fit_function(x, p)
% In our example we want to fit a polynomial, so we do so. We evaluate the
% polynomial p at x.
yfitted = polyval(p,x);
% But it could be anything, really.. each value in p could be something
% else, maybe the sum of an exponential function and a straight line
% yfitted = p(1)*exp(p(2)*x) + p(3)*x + p(4);
end
You can try to use CVX. It is an addon for Matlab that lets you describe your optimisation problem with normal Matlab code.
Alternatively, write down your objective function including any constraints. Your description is not clear to me, and it would help you too, if you would write this down in actual formulae.
I read your steps as this:
"Computing an L2-norm between an observed data point and a set of n training data points." It seems that there is a total of one (1) observed data points. Let's call the observed point x. Let's call the training data points y_i for i=1..n.
The L2-Norm is: |x-y_i|.
"Selecting those data points [multiple?] that give the lowest misfit". You haven't said how many data points you want, and how you'd combine multiple points to give a single L2-Norm. Let's assume you want exactly one such point (the closest to the observed data point x). Thus you get: argmin (over i) |x-y_i|. If you have multiple, you could greedily take the k closest points.
"Then using the row indices of this selected subset to compute the final distance that I intend to minimize." And what is the final distance that you intend to minimize?

Quantile regression with linprog in Matlab

I am trying to implement the quantile regression process with a simple setup in Matlab. This page contains a description of the quantile regression as a linear program, and displays the appropriate matrices and vectors. I've tried to implement it in Matlab, but I do not get the correct last element of the bhat vector. It should be around 1 but I get a very low value (<1e-10). Using another algorithm I have, I get a value of 1.0675. Where did I go wrong? I'm guessing A, b or f are wrong.
I have tried playing with optimset, but I don't think that is the problem. I think I've made a conversion mistake when going from math to code, I just can't see where.
% set seed
rng(1);
% set parameters
n=30;
tau=0.5;
% create regressor and regressand
x=rand(n,1);
y=x+rand(n,1)/10;
% number of regressors (1)
m=size(x,2);
% vectors and matrices for linprog
f=[tau*ones(n,1);(1-tau)*ones(n,1);zeros(m,1)];
A=[eye(n),-eye(n),x;
-eye(n),eye(n),-x;
-eye(n),zeros(n),zeros(n,m);
zeros(n),-eye(n),zeros(n,m)];
b=[y;
y;
zeros(n,1);
zeros(n,1)];
% get solution bhat=[u,v,beta] and exitflag (1=succes)
[bhat,~,exflag]=linprog(f',A,b);
I solved my problem by using the formulation (in the pdf) above the one I tried to implement in the question. I've put it in a Matlab function in case you're interested in the code.
function [ bhat ] = qregressMatlab( y, x, tau )
% bhat are the estimates
% y is a vector of outcomes
% x is a matrix with columns of explanatory variables
% tau is a scalar for choosing the conditional quantile to be estimated
n=size(x,1);
m=size(x,2);
% vectors and matrices for linprog
f=[tau*ones(n,1);(1-tau)*ones(n,1);zeros(m,1)];
Aeq=[eye(n),-eye(n),x];
beq=y;
lb=[zeros(n,1);zeros(n,1);-inf*ones(m,1)];
ub=inf*ones(m+2*n,1);
% Solve the linear programme
[bhat,~,~]=linprog(f,[],[],Aeq,beq,lb,ub);
% Pick out betas from (u,v,beta)-vector.
bhat=bhat(end-m+1:end);
end
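For reference, calling it on the data generated in the question (same seed, n = 30, tau = 0.5) would look like the following; per the question, the estimate should come out close to 1 (another algorithm gave about 1.07):
rng(1);
n = 30; tau = 0.5;
x = rand(n,1);
y = x + rand(n,1)/10;
bhat = qregressMatlab(y, x, tau)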

Strange error when trying to compute cotangent

I'm new to MATLAB, and I'm trying to make sense of some scripts I have. In one, I have an expression for computing short-circuit impedance (within the context of other expressions):
Z=tan(2*p*f*d/vp)
That's fine and dandy, but when I want to change from tangent to the negative cotangent (for an open-circuit) like this:
Z=-1/tan(2*p*f*d/vp)
It gives me an error at that line as follows:
?? Error using ==> mldivide
Matrix dimensions must agree
Now, AFAIK none of the subexpressions in computing Z are matrices. What makes it more confusing is that if I replace 1/tan with cot, then it works (regardless of whether I add a - sign in front of it or not):
Z=-cot(2*p*f*d/vp)
Any ideas? I've done my googling on the mldivide error, but I just don't see how that applies to computing the cotangent as literally the inverse of the tangent.
Am I missing a MATLAB peculiarity here? Thanks.
-- EDIT --
I think I should have included the entire source code (originally for calculating the input impedance of a short-circuit line; I attempted a change from tan to -cot for an open-circuit line):
close all; % close all opened graphs
figure; % open new graph
% define distributed line parameters
L=209.410e-9; % line inductance in H/m
C=119.510e-12; % line capacitance in F/m
vp=1/sqrt(L*C); % phase velocity
Z0=sqrt(L/C); % characteristic line impedance
d=0.1; % line length
N=5000; % number of sampling points
f=1e9+3e9*(0:N)/N; % set frequency range
%Z=tan(2*pi*f*d/vp); % short circuit impedance
Z= -1/tan(2*pi*f*d/vp); % open circuit impedance
plot(f/1e9,abs(Z0*Z));
title('Input impedance of a short-circuit transmission line');
xlabel('Frequency {\itf}, GHz');
ylabel('Input impedance |Z|, {\Omega}');
axis([1 4 0 500]);
% print -deps 'fig2_28.eps' % if uncommented -> saves a copy of plot in EPS format
I guess one of p, f, or d is a matrix (in the full code you posted, f is a vector), so tan(2*p*f*d/vp) will be a matrix as well. 1/matrix won't work because / is defined as matrix division, which imposes restrictions on the dimensions of your matrices.
Try
Z=-1./tan(2*p*f*d/vp)
This is the element-wise division. (I assume that's what you want.)
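To see the difference between / and ./ on something small, here is a made-up example:
a = [2 4]; b = [1 2];
a./b   % element-wise division: returns [2 2]
a/b    % matrix right division: solves z*b = a in a least-squares sense and returns 2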
That code works fine so long as p, f, d and vp are all scalar. Therefore, one of your inputs must be non-scalar.
The / sign is matrix division (i.e. multiplication by the inverse from the right), which requires compatible matrix dimensions. Everything works fine with scalars, but as soon as an array is involved you have to use ./, i.e. element-wise division, instead.
>> p = 0.1;
>> f = 0.2;
>> d = 0.01;
>> vp =0.2;
>> Z=-1/tan(2*p*f*d/vp)
Z =
-499.9993
It would seem that you are passing a matrix, just as Matlab tells you.