How do I compare a matrix to a table, interpolate to the closest values, and generate a new matrix? - matlab

I have a question about comparing values from one matrix against another matrix and then generating a new matrix with interpolated values.
I have a matrix with timestamps, wind speeds, and directions that looks like this:
Timestamp Wind speed Direction
13-Apr-2000 00:10:00 9.285 265.59
13-Apr-2000 00:20:00 7.044 261.32
13-Apr-2000 00:30:00 6.578 258.66
13-Apr-2000 00:40:00 7.476 261.43
13-Apr-2000 00:50:00 6.918 260.29
13-Apr-2000 01:00:00 6.832 253.48
13-Apr-2000 01:10:00 6.368 250.11
13-Apr-2000 01:20:00 5.279 260.44
13-Apr-2000 01:30:00 5.27 266.75
In my other matrix I have my turbulence intensity (TI), which depends on speed (rows, top to bottom) and direction (columns, left to right):
0 5 10 15 20 25
0 12.368 12.368 12.368 12.7585 13.149 13.149
1 12.368 12.368 12.368 12.7585 13.149 13.149
2 11.934 11.934 11.934 12.4135 12.893 12.893
3 11.726 11.726 11.726 11.917 12.108 12.108
4 11.391 11.391 11.391 11.065 10.739 10.739
5 11.32 11.32 11.32 11.0505 10.781 10.781
6 11.062 11.062 11.062 10.958 10.854 10.854
7 10.932 10.932 10.932 11.0905 11.249 11.249
8 11.244 11.244 11.244 11.294 11.344 11.344
9 12.037 12.037 12.037 11.757 11.477 11.477
10 11.934 11.934 11.934 11.8795 11.825 11.825
I want to write a function whose input is the matrix with timestamps, wind speeds, and directions. For each timestamp, the function should take the wind speed and direction and interpolate to the closest turbulence value in my turbulence matrix.
The function should then generate a new time series (matrix) with the interpolated turbulence values at the same timestamps as the original time series.
How can I do this?
I'm using MATLAB 2011b and I don't have SIMULINK.

I use the fit function to interpolate the data in the Tt matrix.
The code first finds the nearest lower values of both Speed and Direction. It then takes the four table points surrounding the query point, fits a surface of the form z=ax+by+c to them, and finally evaluates that surface at the point of interest.
% Data we have
Tt=[nan 0 5 10 15 20 25;
0 12.368 12.368 12.368 12.7585 13.149 13.149;
1 12.368 12.368 12.368 12.7585 13.149 13.149;
2 11.934 11.934 11.934 12.4135 12.893 12.893;
3 11.726 11.726 11.726 11.917 12.108 12.108;
4 11.391 11.391 11.391 11.065 10.739 10.739;
5 11.32 11.32 11.32 11.0505 10.781 10.781;
6 11.062 11.062 11.062 10.958 10.854 10.854;
7 10.932 10.932 10.932 11.0905 11.249 11.249;
8 11.244 11.244 11.244 11.294 11.344 11.344;
9 12.037 12.037 12.037 11.757 11.477 11.477;
10 11.934 11.934 11.934 11.8795 11.825 11.825];
% Data we are looking for
Speed=4.5;
Direction=12.5;
[M,N]=size(Tt);
Sindex=find(Speed<Tt(:,1),1)-1; % row of the nearest lower table speed
Dindex=find(Direction<Tt(1,:),1)-1; % column of the nearest lower table direction
if ~isempty(Sindex)&&~isempty(Dindex)&&Sindex>=2&&Sindex<M&&Dindex>=2&&Dindex<N
% Speed and Direction are defined in the Tt table
S=[Tt(Sindex,1)*[1;1];Tt(Sindex+1,1)*[1;1]];
D=[Tt(1,Dindex);Tt(1,Dindex+1);Tt(1,Dindex);Tt(1,Dindex+1)];
T=[Tt(Sindex,Dindex);Tt(Sindex,Dindex+1);Tt(Sindex+1,Dindex);Tt(Sindex+1,Dindex+1)];
% S, D, T hold the four cell corners in vectorized form: [x1;x1;x2;x2], [y1;y2;y1;y2], [z11;z12;z21;z22].
Tfit=fit([S,D],T,'poly11'); % get the linear fit of data, type help fit for more info
Turbulence=feval(Tfit,[Speed,Direction]) %Here we have the wanted Turbulence value.
else
% What shall we do when the data fall outside the table? (e.g. clamp to the nearest edge or return NaN)
end
Edit according to comment
If the Tt matrix is in the form [S,D,T], then
Tfit=fit(Tt(:,1:2),Tt(:,3),'linearinterp');
will interpolate over the whole matrix.
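Outside MATLAB, the same nearest-cell lookup can be sketched in Python/NumPy (an illustrative translation using the question's table, not the answer's actual code). Note that it clamps out-of-table queries, such as the question's directions near 265°, to the nearest table edge; that is just one possible policy for the else branch above:

```python
import numpy as np

# TI table from the question: rows = wind speed 0..10 m/s, columns = direction.
speeds = np.arange(11.0)
dirs = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
TI = np.array([
    [12.368, 12.368, 12.368, 12.7585, 13.149, 13.149],
    [12.368, 12.368, 12.368, 12.7585, 13.149, 13.149],
    [11.934, 11.934, 11.934, 12.4135, 12.893, 12.893],
    [11.726, 11.726, 11.726, 11.917,  12.108, 12.108],
    [11.391, 11.391, 11.391, 11.065,  10.739, 10.739],
    [11.32,  11.32,  11.32,  11.0505, 10.781, 10.781],
    [11.062, 11.062, 11.062, 10.958,  10.854, 10.854],
    [10.932, 10.932, 10.932, 11.0905, 11.249, 11.249],
    [11.244, 11.244, 11.244, 11.294,  11.344, 11.344],
    [12.037, 12.037, 12.037, 11.757,  11.477, 11.477],
    [11.934, 11.934, 11.934, 11.8795, 11.825, 11.825],
])

def turbulence(speed, direction):
    """Bilinear interpolation in the TI table, clamping queries to the edges."""
    speed = float(np.clip(speed, speeds[0], speeds[-1]))
    direction = float(np.clip(direction, dirs[0], dirs[-1]))
    i = int(np.clip(np.searchsorted(speeds, speed) - 1, 0, len(speeds) - 2))
    j = int(np.clip(np.searchsorted(dirs, direction) - 1, 0, len(dirs) - 2))
    ts = (speed - speeds[i]) / (speeds[i + 1] - speeds[i])
    td = (direction - dirs[j]) / (dirs[j + 1] - dirs[j])
    return ((1 - ts) * (1 - td) * TI[i, j] + (1 - ts) * td * TI[i, j + 1]
            + ts * (1 - td) * TI[i + 1, j] + ts * td * TI[i + 1, j + 1])

# The new time series: one interpolated TI value per (speed, direction) row.
ws = [9.285, 7.044, 6.578]
wd = [265.59, 261.32, 258.66]
ti_series = [turbulence(s, d) for s, d in zip(ws, wd)]
```

In MATLAB itself, interp2 over the speed/direction grid achieves the same bilinear lookup in one call.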
If you have several Tt matrices I'd recommend a two-step approach. First, create your TtDataBase.mat by
save('TtDataBase.mat','Tfit1') % Run this in workspace for the first time only
save('TtDataBase.mat','Tfit2','-append') % Run this for the other Tfits
The first call creates (or overwrites) the .mat file, while the second appends a new variable to the existing file.
I'd also recommend appending a short description of each Tfit, say its range of validity etc.
In the second step you can use
load('TtDataBase.mat','Tfit2') % load Tfit2 only
load('TtDataBase.mat') % load all variables in TtDataBase
Once you have the specification and have located the proper Tfit, you can use
load('TtDataBase.mat','description')
% decide which Tfit is good for the situation
% some code here
ProperFit='Tfit5'; % say the automated "logic" chose Tfit5 as the best
load('TtDataBase.mat',ProperFit); % Tfit5 will appear in the workspace
eval(['Tfit=' ProperFit ';']); % Tfit is now a copy of Tfit5
Turbulence=feval(Tfit,[Speed,Direction]); % actual Tfit5 data will be used for interpolation.
If you obtain a new Tt matrix and you already have description variable(s), do not forget to load the old description(s) and append the new ones to them before calling save('TtDataBase.mat','description','description2','-append'), because that command overwrites the existing variables.

Related

linear combination of curves to match a single curve with integer constraints

I have a set of vectors (curves) which I would like to match to a single curve. The issue isn't only finding the linear combination of the set of curves which most closely matches the single curve (this can be done with least squares, Ax = B). I need to be able to add constraints, for example limiting the number of curves used in the fit to a particular number, or requiring that the chosen curves lie next to each other. These constraints would call for mixed-integer programming.
I have started by using lsqlin which allows constraints and have been able to limit the variable to be > 0.0, but in terms of adding further constraints I am at a loss. Is there a way to add integer constraints to least squares, or alternatively is there a way to solve this with a MILP?
Any help in the right direction is much appreciated!
Edit: Based on the suggestion by ErwinKalvelagen I am attempting to use CPLEX and its quadratic solvers, but so far I have not managed to get it working. I have created a minimal non-working example; the data is uploaded here and the code is below. The issue is that MATLAB's LS solver lsqlin can solve the problem, while CPLEX's cplexlsqnonneglin returns "CPLEX Error 5002: %s is not convex" for the same problem.
function [ ] = minWorkingLSexample( )
%MINWORKINGLSEXAMPLE for LS with matlab and CPLEX
%matlab is able to solve the least squares, CPLEX returns error:
% Error using cplexlsqnonneglin
% CPLEX Error 5002: %s is not convex.
%
%
% Error in Backscatter_Transform_excel2_readMut_LINPROG_CPLEX (line 203)
% cplexlsqnonneglin (C,d);
%
load('C_n_d_2.mat')
lb = zeros(size(C,2),1);
options = optimoptions('lsqlin','Algorithm','trust-region-reflective');
[fact2,resnorm,residual,exitflag,output] = ...
lsqlin(C,d,[],[],[],[],lb,[],[],options);
%% CPLEX
ctype = cellstr(repmat('C',1,size(C,2)));
options = cplexoptimset;
options.Display = 'on';
[fact3, resnorm, residual, exitflag, output] = ...
cplexlsqnonneglin (C,d);
end
I could reproduce the Cplex problem. Here is a workaround: instead of minimizing ||Cx - d||^2 directly (model 1), introduce residual variables r with the linear constraints r = Cx - d and minimize r'r (model 2), which is less nonlinear.
The second model solves fine with Cplex. The problem is somewhat of a tolerance/numeric issue: for the second model we have a much more well-behaved Q matrix (a diagonal). Essentially we moved some of the complexity from the objective into linear constraints.
Running the second model through Cplex, you should now see something like:
Tried aggregator 1 time.
QP Presolve eliminated 1 rows and 1 columns.
Reduced QP has 401 rows, 443 columns, and 17201 nonzeros.
Reduced QP objective Q matrix has 401 nonzeros.
Presolve time = 0.02 sec. (1.21 ticks)
Parallel mode: using up to 8 threads for barrier.
Number of nonzeros in lower triangle of A*A' = 80200
Using Approximate Minimum Degree ordering
Total time for automatic ordering = 0.00 sec. (3.57 ticks)
Summary statistics for Cholesky factor:
Threads = 8
Rows in Factor = 401
Integer space required = 401
Total non-zeros in factor = 80601
Total FP ops to factor = 21574201
Itn Primal Obj Dual Obj Prim Inf Upper Inf Dual Inf
0 3.3391791e-01 -3.3391791e-01 9.70e+03 0.00e+00 4.20e+04
1 9.6533667e+02 -3.0509942e+03 1.21e-12 0.00e+00 1.71e-11
2 6.4361775e+01 -3.6729243e+02 3.08e-13 0.00e+00 1.71e-11
3 2.2399862e+01 -6.8231454e+01 1.14e-13 0.00e+00 3.75e-12
4 6.8012056e+00 -2.0011575e+01 2.45e-13 0.00e+00 1.04e-12
5 3.3548410e+00 -1.9547176e+00 1.18e-13 0.00e+00 3.55e-13
6 1.9866256e+00 6.0981384e-01 5.55e-13 0.00e+00 1.86e-13
7 1.4271894e+00 1.0119284e+00 2.82e-12 0.00e+00 1.15e-13
8 1.1434804e+00 1.1081026e+00 6.93e-12 0.00e+00 1.09e-13
9 1.1163905e+00 1.1149752e+00 5.89e-12 0.00e+00 1.14e-13
10 1.1153877e+00 1.1153509e+00 2.52e-11 0.00e+00 9.71e-14
11 1.1153611e+00 1.1153602e+00 2.10e-11 0.00e+00 8.69e-14
12 1.1153604e+00 1.1153604e+00 1.10e-11 0.00e+00 8.96e-14
Barrier time = 0.17 sec. (38.31 ticks)
Total time on 8 threads = 0.17 sec. (38.31 ticks)
QP status(1): optimal
Cplex Time: 0.17sec (det. 38.31 ticks)
Optimal solution found.
Objective : 1.115360
See here for some details.
Update: In Matlab this becomes:

Speedup constrained shuffling. GPU (Tesla K40m), CPU parallel computations in MATLAB

I have 100 lamps. They are blinking, and I observe them for some time. For each lamp I calculate the mean, std, and autocorrelation of the intervals between blinks.
Now I need to resample the observed data and keep the permutations where all parameters (mean, std, autocorrelation) are inside some range. The code I have works, but it takes too long (a week) per round of the experiment. I run it on a computing server with 12 cores and 2 Tesla K40m GPUs (details are at the end).
My code:
close all
clear all
clc
% open parpool skip error if it was opened
try parpool(24); end
% Sample input. It is faked, just for demo.
% Number of "lamps" and number of "blinks" are similar to real.
NLamps = 10^2;
NBlinks = 2*10^2;
Events = cumsum([randg(9,NLamps,NBlinks)],2); % each row - different "lamp"
DurationOfExperiment=Events(:,end).*1.01;
%% MAIN
% Define parameters
nLags=2; % I need to keep autocorrelation with lags 1-2
alpha=[0.01,0.1]; % range of allowed relative deviation from observed
% parameters should be > 0 to avoid generating original
% sequence
nPermutations=10^2; % In original code 10^5
% Processing of experimental data
DurationOfExperiment=num2cell(DurationOfExperiment);
Events=num2cell(Events,2);
Intervals=cellfun(@(x) diff(x),Events,'UniformOutput',false);
observedParams=cellfun(@(x) fGetParameters(x,nLags),Intervals,'UniformOutput',false);
observedParams=cell2mat(observedParams);
% Constrained shuffling. EXPENSIVE PART!!!
while true
parfor iPermutation=1:nPermutations
% Shuffle intervals
shuffledIntervals=cellfun(@(x,y) fPermute(x,y),Intervals,DurationOfExperiment,'UniformOutput',false);
% get parameters of shuffled intervals
shuffledParameters=cellfun(@(x) fGetParameters(x,nLags),shuffledIntervals,'UniformOutput',false);
shuffledParameters=cell2mat(shuffledParameters);
% get relative deviation
delta=abs((shuffledParameters-observedParams)./observedParams);
% find shuffled Lamps, which are inside alpha range
MaximumDeviation=max(delta,[] ,2);
MinimumDeviation=min(delta,[] ,2);
LampID=find(and(MaximumDeviation<alpha(2),MinimumDeviation>alpha(1)));
% if shuffling of ANY lamp was successful, save these Intervals
if ~isempty(LampID)
shuffledIntervals=shuffledIntervals(LampID);
shuffledParameters=shuffledParameters(LampID,:);
parsave( LampID,shuffledIntervals,shuffledParameters);
'DONE'
end
end
end
%% FUNCTIONS
function [ params ] = fGetParameters( intervals,nLags )
% Calculate [mean,std,autocorrelations with lags from 1 to nLags
R=nan(1,nLags);
for lag=1:nLags
R(lag) = corr(intervals(1:end-lag)',intervals((1+lag):end)','type','Spearman');
end
params = [mean(intervals),std(intervals),R];
end
%--------------------------------------------------------------------------
function [ Intervals ] = fPermute( Intervals,Duration )
% Create long shuffled time-series
Time=cumsum([0,datasample(Intervals,numel(Intervals)*3)]);
% Keep the same duration
Time(Time>Duration)=[];
% Calculate Intervals
Intervals=diff(Time);
end
%--------------------------------------------------------------------------
function parsave( LampID,Intervals,params)
save([num2str(randi(10^9)),'.mat'],'LampID','Intervals','params')
end
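For reference, the core resampling step (fPermute) can be sketched in Python (a hypothetical translation, not part of the original code): bootstrap the intervals to roughly three times the original count, cumulate them into event times, and truncate at the experiment duration:

```python
import numpy as np

def permute_intervals(intervals, duration, rng):
    """Bootstrap-resample the intervals, then truncate the series to duration."""
    sample = rng.choice(intervals, size=3 * len(intervals), replace=True)
    times = np.concatenate([[0.0], np.cumsum(sample)])  # event times
    times = times[times <= duration]                    # keep the same duration
    return np.diff(times)                               # back to intervals

rng = np.random.default_rng(0)
iv = rng.gamma(9.0, size=200)              # fake intervals, like randg(9,...)
duration = iv.sum() * 1.01
shuffled = permute_intervals(iv, duration, rng)
```

The shuffled series therefore always fits inside the original observation window, which is what the `Time(Time>Duration)=[]` line enforces in the MATLAB version.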
Server specs:
>>gpuDevice()
CUDADevice with properties:
Name: 'Tesla K40m'
Index: 1
ComputeCapability: '3.5'
SupportsDouble: 1
DriverVersion: 8
ToolkitVersion: 8
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.1979e+10
AvailableMemory: 1.1846e+10
MultiprocessorCount: 15
ClockRateKHz: 745000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 0
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
>> feature('numcores')
MATLAB detected: 12 physical cores.
MATLAB detected: 24 logical cores.
MATLAB was assigned: 24 logical cores by the OS.
MATLAB is using: 12 logical cores.
MATLAB is not using all logical cores because hyper-threading is enabled.
>> system('for /f "tokens=2 delims==" %A in (''wmic cpu get name /value'') do @(echo %A)')
Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
>> memory
Maximum possible array: 496890 MB (5.210e+11 bytes) *
Memory available for all arrays: 496890 MB (5.210e+11 bytes) *
Memory used by MATLAB: 18534 MB (1.943e+10 bytes)
Physical Memory (RAM): 262109 MB (2.748e+11 bytes)
* Limited by System Memory (physical + swap file) available.
Question:
Is it possible to speed up my calculation? I am thinking about combined CPU+GPU computing, but I could not figure out how to do it (I have no experience with gpuArrays). Moreover, I am not sure it is a good idea: sometimes an algorithmic optimisation gives a bigger payoff than parallel computing.
P.S.
Saving step is not the bottleneck- it happens once in 10-30 mins in best case.
GPU-based processing is only available on some functions and with the right cards (if I remember correctly).
For the GPU part of your question MATLAB has a list of available functions - that you can run on GPU - the most expensive part of your code is the function corr which unfortunately isn't on the list.
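One algorithmic option, independent of the GPU, exploits the fact that Spearman correlation is just Pearson correlation of ranks: rank-transform each lagged segment with argsort and feed the ranks to np.corrcoef. This is a hypothetical sketch in Python/NumPy of that idea, not a drop-in replacement for the MATLAB code:

```python
import numpy as np

def _rank(v):
    """0-based ranks of v (assumes no ties, which holds for continuous data)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def spearman_lag(x, nlags):
    """Spearman autocorrelation for lags 1..nlags via Pearson corr of ranks."""
    out = np.empty(nlags)
    for lag in range(1, nlags + 1):
        a, b = _rank(x[:-lag]), _rank(x[lag:])
        out[lag - 1] = np.corrcoef(a, b)[0, 1]
    return out

r = spearman_lag(np.arange(10.0), 2)   # a strictly increasing series
```

Ranking is an O(n log n) sort and the correlation itself is a vectorized dot product, so this avoids the per-call overhead of a general-purpose corr routine.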
If the profiler isn't highlighting bottlenecks - something weird is going on... So I ran some tests on your code above:
nPermutations = 10^0 iteration takes ~0.13 seconds
nPermutations = 10^1 iteration takes ~1.3 seconds
nPermutations = 10^3 iteration takes ~130 seconds
nPermutations = 10^4 probably takes ~1300 seconds
nPermutations = 10^5 probably takes ~13000 seconds
Which is a lot less than a week...
Note that I put a break at the end of your while statement, as I couldn't see anywhere in your code where you ever break out of the while loop. I hope for your sake that this isn't the reason your function runs forever:
while true
parfor iPermutation=1:nPermutations
% Shuffle intervals
shuffledIntervals=cellfun(@(x,y) fPermute(x,y),Intervals,DurationOfExperiment,'UniformOutput',false);
% get parameters of shuffled intervals
shuffledParameters=cellfun(@(x) fGetParameters(x,nLags),shuffledIntervals,'UniformOutput',false);
shuffledParameters=cell2mat(shuffledParameters);
% get relative deviation
delta=abs((shuffledParameters-observedParams)./observedParams);
% find shuffled Lamps, which are inside alpha range
MaximumDeviation=max(delta,[] ,2);
MinimumDeviation=min(delta,[] ,2);
LampID=find(and(MaximumDeviation<alpha(2),MinimumDeviation>alpha(1)));
% if shuffling of ANY lamp was successful, save these Intervals
if ~isempty(LampID)
shuffledIntervals=shuffledIntervals(LampID);
shuffledParameters=shuffledParameters(LampID,:);
parsave( LampID,shuffledIntervals,shuffledParameters);
'DONE'
end
end
break % You need to break out of the loop at some point
% otherwise it would run forever....
end
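A hypothetical sketch (in Python, for illustration) of one sensible stopping criterion: keep looping until a target number of accepted permutations has been collected, with a hard cap on attempts as a safety net:

```python
import random

def collect_accepted(trial, n_wanted, max_rounds):
    """Loop until n_wanted acceptances, with a hard cap on total attempts."""
    accepted = []
    for _ in range(max_rounds):
        result = trial()
        if result is not None:
            accepted.append(result)
            if len(accepted) >= n_wanted:
                break                     # enough hits: stop the search
    return accepted

# Toy trial: "accept" roughly 10% of draws, mimicking the rare LampID hits.
rng = random.Random(0)
def trial():
    value = rng.random()
    return value if value < 0.1 else None

hits = collect_accepted(trial, n_wanted=5, max_rounds=10_000)
```

In the MATLAB code the equivalent would be counting the saved LampID hits and replacing `while true` with a loop condition on that count.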

multiple training data for cascade-forward backpropagation network

I am training my neural network with data from 3 consecutive days and testing it with data from a 4th day. The values in this example are randomly chosen and have no relation to reality. I want the neural network to learn the current as a function of the temperature and the solar radiation.
%% initialize data for training
Temperature_Day1 = [25 26 27 26 25];
Temperature_Day2 = [25 24 24 23 24];
Temperature_Day3 = [21 20 22 21 20];
SolarRadiation_Day1 = [990 944 970 999 962];
SolarRadiation_Day2 = [993 947 973 996 967];
SolarRadiation_Day3 = [993 948 973 998 965];
Current_Day1 = [0.11 0.44 0.44 0.45 0.56];
Current_Day2 = [0.41 0.34 0.43 0.55 0.75];
Current_Day3 = [0.34 0.98 0.34 0.76 0.71];
Day1 = [Temperature_Day1; SolarRadiation_Day1]; % 2-by-5
Day2 = [Temperature_Day2; SolarRadiation_Day2]; % 2-by-5
Day3 = [Temperature_Day3; SolarRadiation_Day3]; % 2-by-5
%% training input and training target
Training_Input = [Day1; Day2; Day3]; % 6-by-5
Training_Target = [Current_Day1; Current_Day2; Current_Day3]; % 3-by-5
%% training the network
hiddenLayers= 2;
net = newcf(Training_Input, Training_Target, hiddenLayers);
y = sim(net, Training_Input);
net.trainParam.epochs = 100;
net = train(net, Training_Input, Training_Target);
%% initialize data for prediction
Temperature_Day4 = [45 23 22 11 24];
SolarRadiation_Day4 = [960 984 980 993 967];
Current_Day4 = [0.14 0.48 0.37 0.46 0.77];
Day4 = [Temperature_Day4; SolarRadiation_Day4]; % 2-by-5
Test_Input = [Day4; Day4; Day4]; % same dimension as Training_Input; subject to question
%% prediction
Predicted_Target = sim(net, Test_Input); % yields 3-by-5
My question is: How do I train it with the data of 3 days and then predict the target of the 4th day? Since training and testing inputs must have the same dimension, how do I test it for only one day? Here it is solved by just concatenating three identical data sets of the test input. However, this also yields 3 different data sets for the predicted target.
What is here the right way to do it?
BTW: I have seen this type of question many times, but the answers are never satisfying because they always suggest to change the dimensions of the test input without considering the nature of the problem (which is that only one data set is available for testing). So please don't mark this as a duplicate.
The features that you have for your network are Temperature and SolarRadiation, each taken at specific times during the day. The day on which these readings are taken is irrelevant (otherwise you wouldn't be able to predict the outputs for day 4 given data for days 1-3).
This means that we can simply pass each observation separately by concatenating the days horizontally (and similarly for the target data):
Training_Input = [Day1, Day2, Day3]; % 2-by-15
Training_Target = [Current_Day1, Current_Day2, Current_Day3]; % 1-by-15
The resulting network will give you one output (Current) per observation in the test set, so you don't need to duplicate:
Day4 = [Temperature_Day4; SolarRadiation_Day4]; % 2-by-5
Test_Input = [Day4]; % 2-by-5
PredictedTarget will now be 1-by-5 showing the predicted Current for each of the test observations.
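The layout argument can be sanity-checked outside the Neural Network Toolbox. Here is a hypothetical NumPy sketch that uses the question's numbers, with plain linear regression standing in for newcf, just to illustrate the one-observation-per-column shapes:

```python
import numpy as np

temp = [[25, 26, 27, 26, 25], [25, 24, 24, 23, 24], [21, 20, 22, 21, 20]]
rad = [[990, 944, 970, 999, 962], [993, 947, 973, 996, 967], [993, 948, 973, 998, 965]]
cur = [[0.11, 0.44, 0.44, 0.45, 0.56], [0.41, 0.34, 0.43, 0.55, 0.75], [0.34, 0.98, 0.34, 0.76, 0.71]]

# One column per observation: concatenate the days horizontally.
X = np.vstack([np.hstack(temp), np.hstack(rad)])   # 2-by-15 features
y = np.hstack(cur)                                 # 15 targets, one per column

# Stand-in model: ordinary least squares on [1; temperature; radiation].
A = np.vstack([np.ones(X.shape[1]), X]).T          # 15-by-3 design matrix
w = np.linalg.lstsq(A, y, rcond=None)[0]

# Day 4 needs no duplication: it is simply 5 more observations (columns).
X4 = np.array([[45, 23, 22, 11, 24], [960, 984, 980, 993, 967]], float)
pred = np.vstack([np.ones(X4.shape[1]), X4]).T @ w  # one output per column
```

However many days you concatenate for training, the test set can be any number of columns, which is exactly why the 2-by-15 layout removes the dimension mismatch.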
You might consider adding a third input feature to your net representing the time at which each observation was taken. Assuming that you have t timeslots each day at which observations are taken (thus, length(Temperature) == length(SolarRadiation) == t for all days) and observations are taken at the same times every day, you can add a feature called TimeSlot:
TimeSlot_Day1 = 1:numel(Temperature_Day1);
TimeSlot_Day2 = 1:numel(Temperature_Day2);
TimeSlot_Day3 = 1:numel(Temperature_Day3);
Day1 = [Temperature_Day1; SolarRadiation_Day1; TimeSlot_Day1]; % 3-by-5
Day2 = [Temperature_Day2; SolarRadiation_Day2; TimeSlot_Day2]; % 3-by-5
Day3 = [Temperature_Day3; SolarRadiation_Day3; TimeSlot_Day3]; % 3-by-5

Decide best 'k' in k-means algorithm in weka

I am using the k-means algorithm for clustering, but I am not sure how to decide the optimal value of k based on the results.
For example, I have applied k-means to a dataset with k=10:
kMeans
======
Number of iterations: 16
Within cluster sum of squared errors: 38.47923197081721
Missing values globally replaced with mean/mode
Cluster centroids:
Cluster#
Attribute Full Data 0 1 2 3 4 5 6 7 8 9
(214) (16) (9) (13) (23) (46) (12) (11) (40) (15) (29)
==============================================================================================================================================================================================================================================================
RI 1.5184 1.5181 1.5175 1.5189 1.5178 1.5172 1.519 1.5255 1.5175 1.5222 1.5171
Na 13.4079 12.9988 14.6467 12.8277 13.2148 13.1896 13.63 12.6318 13.0518 13.9107 14.4421
Mg 2.6845 3.4894 1.3056 0.7738 3.4261 3.4987 3.4917 0.2145 3.4958 3.8273 0.5383
Al 1.4449 1.1844 1.3667 2.0338 1.3552 1.4898 1.3308 1.1891 1.2617 0.716 2.1228
Si 72.6509 72.785 73.2067 72.3662 72.6526 72.6989 72.07 72.0709 72.9532 71.7467 72.9659
K 0.4971 0.4794 0 1.47 0.527 0.59 0.4108 0.2345 0.547 0.1007 0.3252
Ca 8.957 8.8069 9.3567 10.1238 8.5648 8.3041 8.87 13.1291 8.5035 9.5887 8.4914
Ba 0.175 0.015 0 0.1877 0.023 0.003 0.0667 0.2864 0 0 1.04
Fe 0.057 0.2238 0 0.0608 0.2013 0.0104 0.0167 0.1109 0.011 0.0313 0.0134
Type build wind non-float build wind float tableware containers build wind non-float build wind non-float build wind float build wind non-float build wind float build wind float headlamps
There are various methods for deciding the optimal value of k in the k-means algorithm: the rule of thumb, the elbow method, the silhouette method, etc. In my work I followed the result obtained from the elbow method and had good results; I did all the analysis in R.
Here is a link describing those methods: link
Follow the sub-links there, build code for any one of the methods, and apply it to your data.
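As an illustration of the elbow method (a hypothetical Python sketch, not Weka or R): run k-means for a range of k, record the within-cluster sum of squared errors (the same quantity Weka reports), and look for the k where the curve stops dropping sharply:

```python
import numpy as np

def farthest_init(X, k):
    """Deterministic farthest-point seeding (avoids unlucky random starts)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.array(centers)[None]) ** 2).sum(2).min(1)
        centers.append(X[d2.argmax()])
    return np.array(centers)

def kmeans_wcss(X, k, iters=50):
    """Lloyd's algorithm; returns the within-cluster sum of squared errors."""
    centers = farthest_init(X, k)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(2)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return ((X - centers[labels]) ** 2).sum()

rng = np.random.default_rng(0)
# Toy data with 3 well-separated blobs.
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 5.0, 10.0)])
wcss = [kmeans_wcss(X, k) for k in range(1, 7)]
# WCSS drops sharply up to k = 3, then flattens: that bend is the "elbow".
```

Plotting wcss against k makes the bend visible; on real data the elbow is rarely this clean, which is why the silhouette method is a useful cross-check.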
I hope this will help you, if not I am sorry.
All the Best with your work.

Matlab forecasting with autoregressive exogenous modell

I have a file with the energy consumption of a house,
one value (watts) every 10 minutes:
10:00 123
10:10 125
10:20 0
...
That means each day has 144 values (rows).
I want to forecast the energy of the next day with an ARX or ARMAX program. I wrote ARX code in MATLAB, but I can't forecast the next day: my code takes the last 5 consumption values and forecasts the 6th one. How can I forecast the next 144 values (= the day after)?
% ARX Process----------------------------
L=length(u_in)
u_in_ID=u_in;% Input data used for Identification
u_in_vfy=u_in;% Input data used for verification
y_out_ID=y_out;% Output data used for Identification
y_out_vfy=y_out;%Output data used for verification
m=5; %Parameter to be used to generate order of delay for Input, Output and Error
n=length(y_out_ID)-m;
I=eye(n,1)+1;
I(1)=I(1)-1;
A=I; % Initialize Matrix A
Y=y_out_ID((m+1):end); % Defining Y vector
length(Y)
na=1;
% Put output delay 1 to m-na in A matrix
for k=1:1:m-na
A=[A y_out_ID((m-k+1):(end-k))];
end
% Put "Current Input -- mth delayed Input" to Matrix A
for p=1:1:m
k=p-1;
A=[A u_in_ID((m-k+1):(end-k))];
end
A(:,1)=[]; % Delete 1st column of Matrix A, which was used to Initialize it
parsol=inv(A'*A)*A'*Y;
BB=A*parsol;
% Generate Identified Output vector based on previous
% outputs, current and previous Inputs and Parameters solved by Least
% square method
n=length(y_out_vfy)-m;
I=eye(n,1)+1;
I(1)=I(1)-1;
A=I;
for k=1:1:m-na
A=[A y_out_vfy((m-k+1):(end-k))];
end
for p=1:1:m
k=p-1;
A=[A u_in_vfy((m-k+1):(end-k))];
end
A(:,1)=[]; % Delete 1st column of Matrix A, which was used to Initialize it
A;
y_out_sysID=A*parsol;
Can anyone help me?
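One standard way to get a whole day ahead is recursive forecasting: identify the parameters once, predict one step, append the prediction to the history as if it were an observation, and repeat 144 times. Here is a hypothetical Python/NumPy sketch of the idea for a pure AR(m) model (the exogenous input term of an ARX model is handled the same way, provided a forecast of the input u is available):

```python
import numpy as np

def fit_ar(y, m):
    """Least-squares fit of y[t] = a1*y[t-1] + ... + am*y[t-m]."""
    A = np.column_stack([y[m - k:len(y) - k] for k in range(1, m + 1)])
    return np.linalg.lstsq(A, y[m:], rcond=None)[0]

def forecast(y, coeffs, steps):
    """Recursive multi-step forecast: feed each prediction back as input."""
    hist = list(y)
    m = len(coeffs)
    for _ in range(steps):
        lags = hist[-1:-m - 1:-1]            # y[t-1], ..., y[t-m]
        hist.append(float(np.dot(coeffs, lags)))
    return np.array(hist[len(y):])

# Toy series that follows y[t] = 0.6*y[t-1] + 0.3*y[t-2] exactly.
y = [1.0, 2.0]
for _ in range(50):
    y.append(0.6 * y[-1] + 0.3 * y[-2])
y = np.array(y)

coeffs = fit_ar(y, 2)                 # identification step
next_day = forecast(y, coeffs, 144)   # 144 steps = one day of 10-min values
```

Be aware that errors compound over the 144 recursive steps, so long-horizon forecasts from a low-order model degrade quickly; including daily seasonality (e.g. the same timeslot of previous days) usually helps.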