Combining an LQR and a Kalman filter into a single controller - MATLAB

What I'm trying to do
I'm trying to create an LQG controller for a given system by combining a Linear Quadratic Regulator (LQR) and a Kalman filter.
Where I'm stuck
I have found both separately, but am unsure how to combine them in MATLAB. I am able to create the Kalman filter and the LQR separately, but I can't figure out how to make the LQR take the Kalman filter's state estimate as its input. This is my system, Kalman filter, and LQR setup:
Akal = Afull;
Bkal = [B1, B2];
Ckal = Cfull;
Dkal = [0 0];
sys_kal = ss(Akal,Bkal,Ckal,Dkal);
[KEST,L,P] = kalman(sys_kal,E_d, E_n, 0)
[K,S,e] = lqr(Afull,B1,Q,r);
When I use the kalman function, here is what size(KEST) gives me:
State-space model with 11 outputs, 2 inputs, and 10 states.
I want my U to use the estimate given by the new state-space system KEST. KEST provides an estimate of the output (y, dimension 1) and estimates of all 10 states (X, dimension 10). I can draw out the closed-loop control path I want to create using the LQR and Kalman functions, but I am stuck at this point because I don't know how to implement it in MATLAB, and I am unsure of the syntax.
I have searched for MATLAB examples but haven't found any that show how to combine what I have. I know that KEST is a state-space model, but I don't know how to use it or how to select a single output from it.
What I'm hoping to get help with
If I use bode(KEST), it gives me a Bode plot for all 11 outputs, and I am not sure how to select just a single output of KEST.
I would like to have U = -K*X_est, but currently I only know the value of K; I don't know how to obtain X_est from my KEST state-space system.

What you are looking for is a command called lqgreg:
rlqg = lqgreg(kest,k)
Also make sure that the outputs of kest are the 10 state estimates, and that the estimated output y is not included in what you feed back.
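As a rough sketch of the whole combination, reusing the names from the question (the exact noise-input partitioning of your plant may differ, and the closed-loop connection at the end is only illustrative):
sys_kal = ss(Afull, [B1 B2], Cfull, 0);      % plant with control input B1 and disturbance input B2
[kest, L, P] = kalman(sys_kal, E_d, E_n, 0); % Kalman estimator (outputs [y_est; x_est] by default)
[K, S, e] = lqr(Afull, B1, Q, r);            % LQ-optimal state-feedback gain
rlqg = lqgreg(kest, K);                      % LQG regulator implementing u = -K*x_est internally
% Close the loop around the nominal plant (control input only).
% lqgreg regulators are connected with POSITIVE feedback:
plant = ss(Afull, B1, Cfull, 0);
clsys = feedback(plant, rlqg, +1);
step(clsys)                                  % inspect the closed-loop response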
To understand it better: LQR is state feedback, so the control law feeds back all of your states through an optimal gain K; the size of K is determined by the number of states of your system model (one gain per state).
The usual problem with LQR is that you rarely have all of the states measured. This is where the Kalman filter comes in: it estimates all of the states from the measured outputs and the known inputs. A Kalman filter can be used for many other things, but here its only role is to provide state estimates for the state feedback, so it should estimate nothing other than the system states.
If you need to know your system output (for plotting or anything else), it is very easy to calculate from the state estimates and the input (y = Cx + Du); alternatively, you could create another Kalman filter just for output estimation. The latter solution is not acceptable when running on a microcontroller or in another low-capacity environment, since you would basically be duplicating the same algorithm.
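For example, with the question's matrices (and D = 0 in that model), the output estimate at any time step is simply:
y_est = Cfull*x_est; % x_est = current state estimate from the Kalman filter; add D*u if your model has feedthrough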

Related

Simple low/high-pass filtering in MATLAB

Can someone help me with another MATLAB project?
Is it possible to create a simple low-pass filter like in RC circuits? For instance, suppose we create a sine wave like y = 10*sin(2*pi*f*t).
Can I just filter the signal with a cutoff frequency to see what the filtered signal looks like (i.e. the amplitude decay)?
So if I enter, for example, f = cutoff and plot the filtered signal, this should reduce the amplitude by roughly 30% relative to the input signal?
I've been looking at functions like butter and cheby1/cheby2, but they seem to do other kinds of filtering, like removing noise from a signal. I just want some simple plots (input and output) that show the principle of RC and RL low- and high-pass filters.
An RC circuit is a first-order filter. To approximate a first-order hardware filter, I generally use an IIR filter. A Butterworth filter is usually my first choice for IIR, but for a first-order response it doesn't really matter.
Here's an example with the cutoff frequency being the same as the signal frequency, so the filtered signal should be 3 dB down...
%first, make signal
fs = 1000; %your sample rate, Hz
dur_sec = 1; %what is the duration of your signal, seconds
t_sec = ([1:dur_sec*fs]-1)/fs; %here is a vector of time
freq_Hz = 25; %what frequency do you want your sine wave
y = sin(2*pi*freq_Hz*t_sec); %make your sine wave
%Second, make your filter
N = 1; %first order
cutoff_Hz = 25; %should be 3dB down at the cutoff
[b,a]=butter(N,cutoff_Hz/(fs/2),'low'); %this makes a lowpass filter
%Third, apply the filter
y_filt = filter(b,a,y);
%Last, plot the results
figure;
plot(t_sec,y,t_sec,y_filt);
xlabel('Time (sec)');
ylabel('Amplitude');
ylim([-1 1]);
legend('Raw','Filtered');
title(['1st-Order Filter with Cutoff at ' num2str(cutoff_Hz) ' Hz']);
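If you also want the high-pass behaviour mentioned in the question (e.g. to mimic an RC high-pass), only the filter-design line needs to change; the rest of the script stays the same:
[b,a]=butter(N,cutoff_Hz/(fs/2),'high'); %first-order Butterworth high-pass
y_filt = filter(b,a,y); %re-apply the filter as before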
As an alternative to using the built-in filter design functions such as butter, you could choose to model the circuit itself. A simple first-order RC (or RL) circuit results in a first-order differential equation. For an RC circuit, you'd then integrate the equation through time, with your sine wave as the stimulus. That would work fine, but could be more of a hassle, depending upon your background.
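As a sketch of that approach (assuming an ideal, buffered RC low-pass and reusing fs, t_sec, y and cutoff_Hz from the script above), a simple forward-Euler integration of the equation RC*dVout/dt = Vin - Vout could look like this:
RC = 1/(2*pi*cutoff_Hz); %choose R*C so the -3 dB point matches cutoff_Hz
dt = 1/fs; %integration step = sample period
v_out = zeros(size(y)); %capacitor voltage (filter output)
for n = 2:length(y)
    dv = (y(n-1) - v_out(n-1))/RC; %dVout/dt = (Vin - Vout)/(R*C)
    v_out(n) = v_out(n-1) + dv*dt; %forward Euler step
end
plot(t_sec,y,t_sec,v_out); legend('Raw','RC model');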
For a simple, first-order hardware filter that is properly buffered by an op-amp on either end of the RC, I think you'd find that the result from the first-order butter filter is going to be awfully close to (if not the same as) modeling the circuit. The butter filter is way easier to implement in software (because I just gave you the code above), so I'd go that route.
When you move to 2nd order hardware filters, however, that's where you have to be more careful. You have a few options:
1) Continue to model your 2nd order hardware using one of the built-in filter functions. A second order butter is trivial to implement (alter the N in the code above), but this might not model the specific hardware filter that you've created. You'll have to choose the right kind of IIR filter to match the architecture of your hardware filter.
2) If you haven't picked your architecture for your hardware filter, you could choose the architecture to follow one of the canonical filter types, just so that it is easy to model via butter, cheby1, or whatever.
3) You could go back to modeling the circuit with differential equations. This would let you model any filter circuit, whether it followed a canonical type or not. You could also put non-linear effects in here, if you wanted.
For the first order RC filter, though, I think that any of the built-in filter types will be a decent enough model for an RC filter. I'd suggest that you play with my example code above. I think that it will satisfy your need.
Chip

Filter performance analysis

I am working with some experimental data which, at some point, needs to be time-integrated and then high-pass filtered (to remove the low-frequency disturbances introduced by the integration and the unwanted DC component).
The aim of my work is not related to filtering, but I would still like to analyze the filters I am using in more detail, to give some justification (for example, to motivate why I chose a 4th-order filter instead of a higher- or lower-order one).
This is the filter I am using:
delta_t = 1.53846e-04;
Fs = 1/delta_t;
cut_F = 8;
Wn = cut_F/(Fs/2);
ftype = 'high';
[b,a] = butter(4,Wn,ftype);
filtered_signal = filtfilt(b,a,signal);
I have already had a look here: High-pass filtering in MATLAB, to learn something about filters (I have never taken a course on signal processing), and I used
fvtool(b,a)
to see the impulse response, step response, etc. of the filter I have used.
The problem is that I do not know how to "read" these plots.
What do I have to look for?
How can I understand if a filter is good or not? (I do not have any specification about filter performances, I just know that the lowest frequency I can admit is 5 Hz)
What features of different filters are useful to be compared to motivate the choice?
I see you are starting your Uni DSP class on filters :)
The first thing you need to remember is that MATLAB can only simulate using finite values, so the results you see are technically all discrete. There are 4 things that will influence your filtering results (or tell you whether your filter is good or bad) which you will learn about, and have to consider, while designing a finite impulse response (FIR) filter:
1. The type of filter/window (e.g. Hamming, Butterworth (the one you are using), Blackman, Hanning, etc.)
2. The number of filter coefficients (which determines your filter's resolution)
3. The sampling frequency of the original signal (ideally, with an infinite sampling frequency, you could have perfect filters; that is not possible in MATLAB for the reason above, but you can approximate the effect by setting it really high)
4. The cut-off frequency
You can play around with the 4 parameters so that your filter does what you want it to.
So here comes the theory:
There is a trade-off between the width of your main lobe and the spectral leakage of your filter. The idea is that your signal contains several frequencies: you want to filter out the unwanted ones (e.g. your DC noise) and keep the ones you want. But what if a desired signal frequency is so low that it sits very close to the DC component? If your filter is badly designed, you will not be able to remove the DC component without affecting that frequency. To design a good filter, you need to find a suitable number of filter coefficients, filter type, and even cut-off frequency to make sure the filter does what you want.
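For instance, to justify the filter order, you could overlay the magnitude responses of a few candidate designs and check how each behaves between DC and the 5 Hz limit (a sketch using the sampling rate and cutoff from the question; the orders compared are arbitrary):
Fs = 1/1.53846e-04; %sampling frequency from the question
Wn = 8/(Fs/2); %normalized 8 Hz cutoff
[b2,a2] = butter(2,Wn,'high'); %2nd-order candidate
[b4,a4] = butter(4,Wn,'high'); %4th-order candidate (the one in the question)
[h2,f] = freqz(b2,a2,65536,Fs); %responses on a common frequency grid (Hz)
[h4,~] = freqz(b4,a4,65536,Fs);
semilogx(f(2:end),20*log10(abs([h2(2:end) h4(2:end)]))); %skip the DC bin on the log axis
xlabel('Frequency (Hz)'); ylabel('Magnitude (dB)');
legend('2nd order','4th order'); grid on;
%note: filtfilt applies the filter forwards and backwards, so the attenuation you
%actually get is double (in dB) what these curves show, with zero phase distortion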
Here is a simple FIR high-pass filter that I wrote back in the day (an ideal low-pass sinc kernel, spectrally inverted to give a high-pass); you can learn a lot by filtering different kinds of signals with it and plotting the response.
N = 21; %number of filter coefficients (odd, so the kernel is symmetric)
fc = 4000; %cut-off frequency, Hz
f_sampling = fs; %sampling frequency, Hz (fs must be defined for your data)
Fc = fc/f_sampling; %normalized cut-off
n = -(N-1)/2:(N-1)/2; %symmetric sample indices
delta = [zeros(1,(N-1)/2) 1 zeros(1,(N-1)/2)]; %unit impulse
h = delta - 2*Fc*sinc(2*n*Fc); %high-pass kernel: impulse minus ideal low-pass (sinc) kernel
output = filter(h,1,yoursignal); %apply the FIR filter to your signal
To plot the response, you want to look at your output in the frequency domain using the DFT (fft in MATLAB) and see how the signal has been distorted due to leakage, etc.:
NFFT = 256; % FFT length
Pxx = 1/NFFT*abs(fft(output,NFFT)).^2; % rough PSD estimate using the FFT
This gives you what is known as a periodogram; when you plot it, you might want to take 10*log10 of it so it looks nicer.
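For example (a sketch reusing the variables from the snippets above):
f_axis = (0:NFFT-1)*f_sampling/NFFT; % frequency of each FFT bin, Hz
plot(f_axis(1:NFFT/2),10*log10(Pxx(1:NFFT/2))); % one-sided periodogram in dB
xlabel('Frequency (Hz)'); ylabel('PSD (dB)');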
Hope you do well in class.

Using Linear Prediction Over Time Series to Determine Next K Points

I have a time series of N sunspot data points and, based on a subset of these points, would like to predict the remaining points in the series and then evaluate the accuracy of the prediction.
I'm just getting introduced to linear prediction in MATLAB, so I decided to use the following code segment in a loop so that every point outside the training set, up to the end of the given data, gets a prediction:
%x is the data, training set is some subset of x starting from beginning
%'unknown' is the number of points to extend the prediction over starting from the
%end of the training set (i.e. difference in length of training set and data vectors)
%x_pred is set to x initially
p = length(training_set);
coeffs = lpc(training_set, p);
for i = 1:unknown
    % predict the next value from the previous p (possibly already predicted) samples
    nextValue = -coeffs(2:end) * x_pred(end-unknown-1+i:-1:end-unknown-1+i-p+1)';
    x_pred(end-unknown+i) = nextValue;
end
prediction_error = norm(x - x_pred)
I have three questions regarding this:
1) Does this appropriately do what I have described? I ask because my error seems rather large (>100) when predicting over only the last 20 points of a dataset that has hundreds of points.
2) Am I interpreting the second argument of lpc correctly? Namely, that it is the 'order', i.e. the number of previous points used to predict the next point?
3) Is there a more efficient, single-line function in MATLAB that I can call to replace the loop and compute all of the necessary predictions for me, given some subset of my overall data as a training set?
I tried looking through the lpc MATLAB tutorial, but it didn't seem to do the prediction the way my needs require. I have also been using How to use aryule() in Matlab to extend a number series? as a reference.
So, after much deliberation and experimentation, I have found the above approach to be correct, and there does not appear to be a single MATLAB function that does the above work. The large errors are reasonable, since I am using a linear prediction algorithm on a problem (sunspot prediction) that has inherently nonlinear behavior.
Hope this helps anyone else out there working on something similar.
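One small addition that may help: for plain one-step-ahead prediction over an existing record (as opposed to the recursive multi-step extension in the loop above, so not a drop-in replacement), the lpc documentation builds the predictor with a single filter call using the same coefficients:
est_x = filter([0 -coeffs(2:end)],1,x); %each sample predicted from the p samples before it
e = x - est_x; %one-step prediction residual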

Fast Fourier transform for deseasonalizing data in MATLAB

I'm very much a novice at signal processing techniques, but I am trying to apply the fast Fourier transform to a daily time series to remove the seasonality present in the data. The example I am working with is from here:
http://www.mathworks.com/help/signal/ug/frequency-domain-linear-regression.html
While I understand how to implement the code as it is written in the example, I am having a hard time adapting it to my specific application. What I am trying to do is create a preprocessing function which deseasonalizes the training data using code similar to the example above. Then, using the same coefficients estimated from the in-sample data, I want to deseasonalize the out-of-sample data, to preserve its independence from the in-sample data. Basically, once the coefficients are estimated, I will normalize each new data point using those same coefficients. I suspect this is akin to estimating a linear trend, removing it from the in-sample data, and then using the same linear model on unseen data to detrend it in the same manner.
Obviously, when I estimate the Fourier coefficients, the vector I get out is the same length as the in-sample data. The out-of-sample data comprises far fewer observations, so directly applying the coefficients is impossible.
Is this sort of analysis possible with this technique, or am I going down a dead-end road? How should I approach it using the code in the example above?
What you want to do is certainly possible, and you are on the right track, but you seem to misunderstand a few points in the example. First, the example shows that the technique is equivalent to linear regression in the time domain; the FFT is exploited to perform the same operation in the frequency domain. Second, the trend that is removed is not linear: it is a sum of sinusoids, which is why the FFT is used to identify the relevant frequency components in a relatively tidy way.
In your case it seems you are interested in the residuals. The initial approach is therefore to proceed as in the example as follows:
(1) Perform a rough "detrending" by removing the DC component (the mean of the time-domain data)
(2) FFT the data and inspect the spectrum; choose the frequency channels that contain most of the signal (a quick sketch of this step follows below).
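A minimal sketch of that step, assuming the in-sample series is in a vector ts and that you keep the three strongest channels (the number kept is arbitrary; the same mu and tsdft are reused in the trend computation below):
mu = mean(ts); % DC component (step 1)
tsdft = fft(ts - mu); % spectrum of the mean-removed in-sample data
half = abs(tsdft(1:floor(length(ts)/2)+1)); % one-sided magnitude spectrum
[~, order] = sort(half,'descend'); % channels ranked by magnitude
NFpick = reshape(sort(order(1:3)),1,[]); % keep, e.g., the 3 strongest channels (1-based; DC = channel 1)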
You can then use those channels to generate a trend in the time domain and subtract that from the original data to obtain the residuals. You need not proceed by using IFFT, however. Instead you can explicitly sum over the cosine and sine components. You do this in a way similar to the last step of the example, which explains how to find the amplitudes via time-domain regression, but substituting the amplitudes obtained from the FFT.
The following code shows how you can do this:
tim = (time - time0)/timestep; % <-- acquisition times for your *new* data, normalized
NFpick = [2 7 13]; % <-- channels you picked to build the detrending baseline
% Compute the trend
mu = mean(ts);
tsdft = fft(ts-mu);
Nchannels = length(ts); % <-- size of time domain data
Mpick = 2*length(NFpick);
X(:,1:2:Mpick) = cos(2*pi*(NFpick-1)'/Nchannels*tim)';
X(:,2:2:Mpick) = sin(-2*pi*(NFpick-1)'/Nchannels*tim)';
% Generate beta vector "bet" containing scaled amplitudes from the spectrum
bet = 2*tsdft(NFpick)/Nchannels;
bet = reshape([real(bet) imag(bet)].', numel(bet)*2,1); % interleave [re; im; re; im; ...] to match the columns of X
trend = X*bet + mu;
To remove the trend just do
detrended = dat - trend;
where dat is your new data acquired at times tim. Make sure you define the time origin consistently. In addition this assumes the data is real (not complex), as in the example linked to. You'll have to examine the code to make it work for complex data.
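For instance, one consistent convention (the variable names here are hypothetical placeholders) is to normalize both records against the same origin and time step:
% time_insample / time_newdata are the raw acquisition times of the two datasets
tim_insample = (time_insample - time_insample(1))/timestep; % used when fitting the channels
tim = (time_newdata - time_insample(1))/timestep; % new data, same origin and step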

Time of Arrival estimation of a signal in Matlab

I want to estimate the time of arrival of GPR echo signals using the MUSIC algorithm in MATLAB; I am using the duality property of the Fourier transform.
I first apply the FFT to the received signal and then pass the result to the pmusic function, but I still get a result in the frequency domain. Why?
Short Answer: You're using the wrong function here.
As far as I can tell, MATLAB's pmusic function returns the pseudospectrum of an input signal.
If you click on the pseudospectrum link, you'll see that the pseudospectrum of a signal lives in the frequency domain. In particular, the example plot in MATLAB's documentation ("Plotting Pseudospectrum Data") has frequency on its x-axis: the result is in the frequency domain.
Assuming that by GPR you mean ground-penetrating radar, try a radar or sonar echo-detection approach to estimate the two-way transit time.
This can be done and the theory has been published in several papers. See, for example, here:
STAR Channel Estimation in DS-CDMA Systems
That paper describes spatiotemporal estimation (i.e. estimation of both time and direction of arrival), but you can ignore the spatial part and just do temporal estimation if you have a single-antenna receiver.
You probably won't want to use Matlab's pmusic function directly. It's always quicker and easier to write these sorts of functions for yourself, so you know what is actually going on. In the case of MUSIC:
% Rxx is the sample covariance matrix of your data snapshots
% Get noise subspace (where M is the number of signals/echoes)
[E, D] = eig(Rxx);
[lambda, idx] = sort(diag(D), 'descend'); % sort eigenvalues in descending order
E = E(:, idx);
En = E(:,M+1:end); % eigenvectors spanning the noise subspace
% [Construct matrix S, whose columns are the vectors to search]
% Calculate MUSIC null spectrum and convert to dB
Z = 10*log10(sum(abs(S'*En).^2, 2));
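In the time-of-arrival setting, the columns of S are delay signatures: a delay tau appears in the frequency domain as the linear phase exp(-j*2*pi*f*tau). A hedged sketch of that construction (f is the vector of frequencies corresponding to your FFT bins; the delay grid is arbitrary):
tau_grid = linspace(0,100e-9,1000); % candidate two-way transit times to search, seconds
S = exp(-1j*2*pi*f(:)*tau_grid); % one column per candidate delay
Z = 10*log10(sum(abs(S'*En).^2, 2)); % null spectrum; the deepest minima mark the arrival times
[~,imin] = min(Z);
toa_estimate = tau_grid(imin); % coarse single-echo estimate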
You can use MATLAB's Phased Array System Toolbox if you want to estimate the DOA with different algorithms using a single command; for example, Root-MUSIC is phased.RootMUSICEstimator and ESPRIT is phased.ESPRITEstimator.
However, as Harry mentioned, it's easy to write your own function: once you define the signal subspace and the receive vector, you can apply them directly in the MUSIC function to find its peaks.
This is another good reference.
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1143830