Replacing three integrators with one - matlab

I need to simulate a linear growth model (in Simulink) with a continuous-time signal, described as:
x' = a*x
with at least three different real parameters "a".
I've managed to do it using three integrators as you can see in the image below:
I've been told that there is a way to do it using only one integrator, but I can't figure it out.

You can give your gain blocks a vector, not just a scalar.
You can use a gain of [1 0.8 1.2] for a single gain block (with the multiplication mode set to Element-wise) instead of having three separate gain blocks set to 1, 0.8, and 1.2.
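If you want to sanity-check the vectorized model outside Simulink, here is a minimal MATLAB sketch of the same idea (the values of a and the initial conditions are assumptions for illustration): integrating a vector state once replaces the three separate integrators.
a  = [1; 0.8; 1.2];                         % the three growth rates as one vector
x0 = [1; 1; 1];                             % assumed initial conditions
[t, x] = ode45(@(t, x) a .* x, [0 5], x0);  % x' = a.*x, element-wise
plot(t, x)                                  % one exponential curve per parameter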

Optimization problem - multivariable linear array, fitting data to data

I have a multivariable linear optimization problem that I could use some guidance on solving in MATLAB. My problem is as follows:
I have a set of observed data, I'll call this d(i), which is a 5000x1 vector (# of rows may change).
I have 10 - 100 sets of simulated data, the number of sets is a number I decide on. Each of these sets is also a 5000x1 vector (again, # of rows may change). I'll call these c1(i), c2(i), etc.
I would like to fit the simulated data sets to the observed data set with this equation:
sf1*c1(i) + sf2*c2(i) + sf3*c3(i) + sf4*c4(i) + ... = d(i) + error
In this equation, I would like to solve for all of the scale factors (sf), which are non-negative constants, and the error. I am assuming I need to set initial values for all the scale factors for this problem to work. I have looked into things like lsqnonneg, but I am unclear on whether that function can solve or optimize for this many variables per equation.
See above - I have also manually input the values of some scale factors and I can get a pretty good fit to the data by hand, but this is impractical for large quantities of simulated data sets.
Did you try looking at https://www.mathworks.com/help/stats/linear-regression.html?s_tid=CRUX_lftnav ?
Instead of using c1, c2, ..., c100 as separate vectors, concatenate them into a 5000x100 matrix, say A = [c1, c2, ..., c100]; ridge (like most MATLAB regression functions) expects one observation per row and one predictor per column, and this will make life easier.
Then look for example at ridge regression
Ans = ridge(d, A, k)
where k is the regularization parameter that can be found by cross-validation:
[U, S, V] = svd(A, 'econ');      % economy-size SVD of the design matrix
k = gcv(U, diag(S), d, 'Tikh');  % 'Tikh' matches the Tikhonov penalty used by ridge
see the function gcv here https://www.mathworks.com/matlabcentral/fileexchange/52-regtools
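Since the question asks for non-negative scale factors, lsqnonneg (which the question already mentions) also applies directly once the simulated sets are stacked as columns of A. A minimal sketch with made-up data standing in for the real vectors:
A  = rand(5000, 10);      % toy data: each column is one simulated set c1..c10
d  = rand(5000, 1);       % toy data: the observed vector
sf = lsqnonneg(A, d);     % non-negative scale factors, one per column of A
err = norm(A*sf - d);     % residual norm, the "error" in the question's equation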

Principal component analysis in matlab?

I have a training set with the size of (size(X_Training)=122 x 125937).
122 is the number of features
and 125937 is the sample size.
From my limited understanding, PCA is useful when you want to reduce the dimension of the features. Meaning, I should reduce 122 to a smaller number.
But when I use in matlab:
X_new = pca(X_Training)
I get a matrix of size 125937x121. I am really confused, because this appears to change not only the features but also the sample size. This is a big problem for me, because I still have the target vector Y_Training that I want to use for my neural network.
Any help? Did I badly interpret the results? I only want to reduce the number of features.
Firstly, the documentation of the PCA function is useful: https://www.mathworks.com/help/stats/pca.html. It mentions that the rows are the samples while the columns are the features. This means you need to transpose your matrix first.
Secondly, you need to specify the number of dimensions to reduce to a priori. The PCA function does not do that for you automatically. Therefore, in addition to extracting the principal coefficients for each component, you also need to extract the scores as well. Once you have this, you simply subset into the scores and perform the reprojection into the reduced space.
In other words:
n_components = 10; % Change to however you see fit.
[coeff, score] = pca(X_Training.');  % transpose: pca expects samples in rows
X_reduce = score(:, 1:n_components); % keep the scores of the first components
X_reduce will be the dimensionality-reduced feature set, with the total number of columns being the total number of reduced features. Also notice that the number of training examples does not change, as we expect. If you want the features along the rows instead of the columns after the reduction, transpose this output matrix as well before you proceed.
Finally, if you want to automatically determine the number of features to reduce to, one method is to calculate the variance explained by each principal component, then accumulate the values from the first component up to the point where some threshold is exceeded. Usually 95% is used.
Therefore, you need to provide additional output variables to capture these:
[coeff, score, latent, tsquared, explained, mu] = pca(X_Training.');
I'll let you go through the documentation to understand the other variables, but the one you're looking at is the explained variable. What you should do is find the point where the total variance explained exceeds 95%:
[~, n_components] = max(cumsum(explained) >= 95); % index of the first component where the threshold is reached
Finally, if you want to see how well a reconstruction into the original feature space performs from the reduced features, you need to reproject into the original space:
X_reconstruct = bsxfun(@plus, score(:, 1:n_components) * coeff(:, 1:n_components).', mu);
mu contains the mean of each feature as a row vector. Therefore you need to add this vector across all examples, so broadcasting is required, and that's why bsxfun is used. If you're using MATLAB R2016b or later, this is done implicitly by the addition operation:
X_reconstruct = score(:, 1:n_components) * coeff(:, 1:n_components).' + mu;
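If you want a single number for how much the reduction loses, one simple option (an illustrative addition, not part of the original answer) is the relative Frobenius-norm reconstruction error:
rel_err = norm(X_Training.' - X_reconstruct, 'fro') / norm(X_Training.', 'fro');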

How to efficiently compute WEIGHTED moving average

I need to compute a weighted moving average without loops and without storing information. The weights could be linear, so that an old sample is weighted less than a new one.
For example, using a 20 samples window, my weights vector would be:
[1 2 3 4 5 ... 20]
I'm using the following formula to compute the moving mean:
newMean = currMean + (newSample - currMean)/WindowSize
Now I need to "inject" the weights.
What I can know:
1. which sample I'm considering (14th....26th....), I can count.
2. of course, I can know currMean
What I can know but I don't want to do:
1. storing all the samples (in my case they are 1200 x 1980 x 3 matrices; I simply can't store them).
I'm currently using Matlab, but I really do not need the code, just the concept, if it exists.
Thank you.
Look into techniques from digital signal processing. You are describing an FIR filter, which can be implemented as a convolution or as a memory-efficient circuit. Basically, you can rewrite it as a recursive equation that keeps only a filter-length window of past state variables. MATLAB does this in the filter function (you can pass the internal state along to continue filtering). See the documentation of filter; I also recommend reading a DSP textbook.
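As a hedged sketch of that concept with MATLAB's filter function (the window length and linear weights follow the question; the signal x is a made-up stand-in for your data stream):
w = 1:20;                 % linear weights, oldest to newest
b = fliplr(w) / sum(w);   % reversed so b(1) applies to the newest sample
x = randn(1, 1000);       % example input stream

% Filter in chunks, carrying only the 19-sample internal state between
% calls; there is no need to store the full history.
[y1, zf] = filter(b, 1, x(1:500));
[y2, ~]  = filter(b, 1, x(501:end), zf);   % continue from the saved state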

Using large input values with Auto Encoders

I have created an Auto Encoder Neural Network in MATLAB. I have quite large inputs at the first layer which I have to reconstruct through the network's output layer. I cannot use the large inputs as they are, so I convert them to between [0, 1] using the sigmf function of MATLAB. It gives me a value of 1.000000 for all the large values. I have tried setting the format but it does not help.
Is there a workaround to using large values with my auto encoder?
The process of converting your inputs to the range [0,1] is called normalization; however, as you noticed, the sigmf function is not adequate for this task. This link may be useful to you.
Suppose that your inputs are given by a matrix of N rows and M columns, where each row represents an input pattern and each column is a feature. If your first column is:
vec =
-0.1941
-2.1384
-0.8396
1.3546
-1.0722
Then you can convert it to the range [0,1] using:
%# get max and min
maxVec = max(vec);
minVec = min(vec);
%# normalize to 0...1
vecNormalized = ((vec-minVec)./(maxVec-minVec))
vecNormalized =
0.5566
0
0.3718
1.0000
0.3052
As @Dan indicates in the comments, another option is to standardize the data. The goal of this process is to scale the inputs to have mean 0 and a variance of 1. In this case, you need to subtract the mean value of the column and divide by the standard deviation:
meanVec = mean(vec);
stdVec = std(vec);
vecStandardized = (vec-meanVec)./stdVec
vecStandardized =
0.2981
-1.2121
-0.2032
1.5011
-0.3839
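For the full N-by-M input matrix, both scalings can be applied to all columns at once. A minimal sketch, assuming implicit expansion (MATLAB R2016b or later; use bsxfun on older versions) and a toy matrix X:
X = 50 * randn(100, 5);                      % toy input, one pattern per row
Xnorm = (X - min(X)) ./ (max(X) - min(X));   % min-max to [0,1], per column
Xstd  = (X - mean(X)) ./ std(X);             % zero mean, unit variance, per column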
Before I give you my answer, let's think a bit about the rationale behind an auto-encoder (AE):
The purpose of an auto-encoder is to learn, in an unsupervised manner, something about the underlying structure of the input data. How does an AE achieve this goal? If it manages to reconstruct the input signal from its output signal (which is usually of lower dimension), it means that it did not lose information and effectively managed to learn a more compact representation.
In most examples it is assumed, for simplicity, that both the input signal and the output signal range in [0..1]. Therefore, the same non-linearity (sigmf) is applied both for obtaining the output signal and for reconstructing the inputs back from the outputs.
Something like:
output = sigmf( W*input + b, [1 0] );              % compute output signal
reconstruct = sigmf( W'*output + b_prime, [1 0] ); % notice the different constant b_prime
Then the AE learning stage tries to minimize the reconstruction error || input - reconstruct ||.
However, who said the reconstruction non-linearity must be identical to the one used for computing the output?
In your case, the assumption that the inputs range in [0..1] does not hold. Therefore, it seems that you need to use a different non-linearity for the reconstruction, one that agrees with the actual range of your inputs.
If, for example, your inputs range in (0..inf), you may consider using exp or ().^2 as the reconstruction non-linearity. You may use polynomials of various degrees, log, or whatever function you think may fit the spread of your input data.
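For instance, here is a hedged sketch of a logistic code layer with an exponential reconstruction for inputs in (0..inf); the dimensions and weights are toy values, and sigmf(z, [1 0]) (Fuzzy Logic Toolbox) is the standard logistic, equivalent to 1./(1+exp(-z)):
input   = 100 * abs(randn(5, 1));          % toy large positive inputs in (0, inf)
W       = 0.01 * randn(3, 5);              % toy encoder weights (3 hidden units)
b       = zeros(3, 1);
b_prime = zeros(5, 1);
code        = sigmf(W*input + b, [1 0]);   % hidden code in (0, 1)
reconstruct = exp(W'*code + b_prime);      % reconstruction lands in (0, inf)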
Disclaimer: I never actually encountered such a case and have not seen this type of solution in literature. However, I believe it makes sense and at least worth trying.

Best way to compare two signals in Matlab

I have a signal I made in matlab that I want to compare to another signal (call them y and z). What I am looking for is a way to assign a value or percentage of how similar two signals are.
I was trying to use corrcoef, but I get very poor values (corrcoef(y,z) = -0.1141), yet when I look at the FFTs of the two signals superimposed on each other, I would have visually said that they are very similar. Taking the corrcoef of the FFT magnitudes of the two signals looks a lot more promising: corrcoef(abs(fft(y)),abs(fft(z))) = 0.9955, but I am not sure if that is the best way to go about it, since the two signals in their pure form appear not to be correlated.
Does anyone have a recommendation of how to compare two signals in Matlab as described?
Thanks!
The question is impossible to answer without a clearer definition of what you mean by "similar".
If by "similar" you mean "correlated frequency responses", then, well, you're one step ahead of the game!
In general, defining the proper metric is highly application specific; you need to answer why you want to know how similar these two signals are to know how to measure how similar they are. Will they be input to the same system? Do they need to be detected by the same algorithm?
In the meantime, your idea to use the freq-domain correlation is not bad. But you might also consider
http://en.wikipedia.org/wiki/Dynamic_time_warping
Or the likelihood of the time-series under various statistical models:
http://en.wikipedia.org/wiki/Hidden_Markov_model
http://en.wikipedia.org/wiki/Autoregressive_model
http://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model
Or any number of other models...
I should add: in general, the correlation coefficient between two time series is a very poor metric of the time series' similarity, except under very specific circumstances (e.g., no shifts in phase).
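A quick illustration of that caveat (made-up signals, not from the question): two identical sinusoids that differ only by a quarter-period shift are essentially uncorrelated:
t = linspace(0, 2*pi, 1000);
y = sin(t);
z = sin(t + pi/2);     % same signal, shifted by a quarter period
r = corrcoef(y, z);    % r(1,2) is approximately 0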
Pete is right that you need to define a notion of similarity before progressing further. However, you might find the normalized maximum cross-correlation magnitude to be a useful notion of similarity for your circumstances:
norm_max_xcorr_mag = @(x,y)(max(abs(xcorr(x,y)))/(norm(x,2)*norm(y,2)));
x = randn(1, 200); y = randn(1, 200); % two random signals
norm_max_xcorr_mag(x,y)
ans = 0.1636
y = [zeros(1, 30), 3.*x]; % y is delayed, multiplied version of x
norm_max_xcorr_mag(x,y)
ans = 1
This notion of similarity is similar to rote correlation of the two sequences but is invariant to time delay.