Manual implementation of Matlab spectrogram function - matlab

I am trying to implement my own function that gives the same results as Matlab's spectrogram function.
So far I have come up with this:
function out = manulaSpectogram(x, win, noverlap, nfft)
    x = x(:);
    n = length(x);
    wlen = length(win);
    nUnique = ceil((1+nfft)/2);            % number of unique frequency points
    L = fix((n-noverlap)/(wlen-noverlap)); % number of signal frames
    out = zeros(L, nUnique);
    index = 1:wlen;
    for i = 0:L-1
        xw = win.*x(index);
        X = fft(xw, nfft);
        out(i+1, :) = X(1:nUnique);
        index = index + (wlen - noverlap);
    end
end
In my tests it works and gives the same results as the spectrogram function when nfft is greater than or equal to the window length.
% first test (nfft = window length):
A = [1,2,3,4,5,6];
window = 6;
overlap = 2;
nfft = 6;
s = spectrogram(A, hamming(window), overlap, nfft)'
s2 = manulaSpectogram(A, hamming(window), overlap, nfft)
% results:
s =
9.7300 + 0.0000i -5.2936 + 0.9205i 0.7279 - 0.3737i -0.1186 + 0.0000i
s2 =
9.7300 + 0.0000i -5.2936 - 0.9205i 0.7279 + 0.3737i -0.1186 + 0.0000i
% second test (nfft > window length):
A = [1,2,3,4,5,6];
window = 3;
overlap = 2;
nfft = 6;
s = spectrogram(A, hamming(window), overlap, nfft)'
s2 = manulaSpectogram(A, hamming(window), overlap, nfft)
% results:
s =
2.3200 + 0.0000i 0.9600 + 1.9399i -1.0400 + 1.5242i -1.6800 + 0.0000i
3.4800 + 0.0000i 1.5000 + 2.8752i -1.5000 + 2.3209i -2.5200 + 0.0000i
4.6400 + 0.0000i 2.0400 + 3.8105i -1.9600 + 3.1177i -3.3600 + 0.0000i
5.8000 + 0.0000i 2.5800 + 4.7458i -2.4200 + 3.9144i -4.2000 + 0.0000i
s2 =
2.3200 + 0.0000i 0.9600 - 1.9399i -1.0400 - 1.5242i -1.6800 + 0.0000i
3.4800 + 0.0000i 1.5000 - 2.8752i -1.5000 - 2.3209i -2.5200 + 0.0000i
4.6400 + 0.0000i 2.0400 - 3.8105i -1.9600 - 3.1177i -3.3600 + 0.0000i
5.8000 + 0.0000i 2.5800 - 4.7458i -2.4200 - 3.9144i -4.2000 + 0.0000i
In the case when nfft is less than the window length, the results are different.
% third test (nfft < window length):
A = [1,2,3,4,5,6];
window = 6;
overlap = 2;
nfft = 3;
s = spectrogram(A, hamming(window), overlap, nfft)'
s2 = manulaSpectogram(A, hamming(window), overlap, nfft)
% results:
s =
9.7300 + 0.0000i 0.7279 - 0.3737i
s2 =
3.6121 + 0.0000i -1.6861 + 1.6807i
So how can I improve my function to get the same results even when nfft is less than the window length? How does Matlab's spectrogram handle this case?
I am implementing my own function because the spectrogram call is part of a larger algorithm that I need to port from Matlab to C#, so I would like to know what the spectrogram "black box" does.

I noticed that when the window size is greater than the nfft scalar, the data has to be transformed somehow. Eventually I found an internal Matlab function that is probably called inside the original spectrogram function. It is named datawrap, and it wraps the input data modulo nfft.
So in my function I had to transform each data segment (in the same way the datawrap function does it) before calling fft, as illustrated below.
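As a quick illustration of the wrapping (a small sketch; the 1:8 test segment and the call to buffer are mine, not part of the original code):
xw = (1:8).';                     % hypothetical 8-sample segment, nfft = 3
wrapped = sum(buffer(xw, 3), 2)   % columns of buffer are consecutive 3-sample chunks (zero-padded);
                                  % summing them gives [1+4+7; 2+5+8; 3+6+0] = [12; 15; 9]
% datawrap(xw, 3) should return the same vector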
Improved function:
function out = manulaSpectogram(x, win, noverlap, nfft)
    x = x(:);
    n = length(x);
    wlen = length(win);
    nUnique = ceil((1+nfft)/2);            % number of unique frequency points
    L = fix((n-noverlap)/(wlen-noverlap)); % number of signal frames
    out = zeros(L, nUnique);
    index = 1:wlen;
    for i = 0:L-1
        xw = win.*x(index);
        % added transformation: wrap the windowed segment modulo nfft,
        % the same way datawrap does it
        if length(xw) > nfft
            xw = sum(buffer(xw, nfft), 2);
        end
        % end of added transformation
        X = fft(xw, nfft);
        out(i+1, :) = X(1:nUnique);
        index = index + (wlen - noverlap);
    end
end
I believe it works properly because it now gives the same results as Matlab's spectrogram function.
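A quick way to check this (a sketch with an arbitrary random test signal; the non-conjugate transpose .' is used so the comparison is not affected by complex conjugation):
x = randn(1, 128);                            % hypothetical test signal
win = hamming(16);
noverlap = 8;
nfft = 8;                                     % nfft smaller than the window length
s  = spectrogram(x, win, noverlap, nfft).';   % non-conjugate transpose
s2 = manulaSpectogram(x, win, noverlap, nfft);
maxDiff = max(abs(s(:) - s2(:)))              % expected to be on the order of round-off error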

The window tapers the ends of each block toward zero. For that to be meaningful, the window length and the FFT block length must be the same. With its default parameters Matlab allows unequal values: nsc is not equal to nfft (see the Matlab documentation for spectrogram's defaults). An error message would be appropriate whenever the window length and nfft differ. In my opinion Matlab's spectrogram defaults are wrong here. Compare with LabView, Dasylab, Hewlett-Packard and http://www.schmid-werren.ch/hanspeter/publications/2012fftnoise.pdf

Related

Matlab: Extract values that I plot but which have not been stored

I have a mathematical function E which I want to minimize. Solving this gives me 16 possible solutions x1, x2, ..., x16, only two of which actually minimize the function (i.e. are located at a minimum). Using a for loop, I can plug all 16 solutions into the original function and select the ones I need by applying some criteria via if statements (plotting x against E(x) only if x is real and positive, if the first derivative of E is below a threshold, and if the second derivative of E is positive).
That way I only plot the solutions I'm interested in. However, I would now like to extract the relevant x values that I plot. Here is a sample MATLAB code that plots the way I just described. I want to extract the thetas that I actually end up plotting. How do I do that?
format long
theta_s = 0.77944100;
sigma = 0.50659500;
Delta = 0.52687700;
%% Defining the coefficients of the 4th degree polynomial
alpha = cos(2*theta_s);
beta = sin(2*theta_s);
gamma = 2*Delta^2/sigma^2;
a = -gamma^2 - beta^2*Delta^2 - alpha^2*Delta^2 + 2*alpha*Delta*gamma;
b = 2*alpha*gamma - 2*Delta*gamma - 2*alpha^2*Delta + 2*alpha*Delta^2 -...
    2*beta^2*Delta;
c = 2*gamma^2 - 2*alpha*Delta*gamma - 2*gamma - alpha^2 + 4*alpha*Delta +...
    beta^2*Delta^2 - beta^2 - Delta^2;
d = -2*alpha*gamma + 2*Delta*gamma + 2*alpha + 2*beta^2*Delta - 2*Delta;
e = beta^2 - gamma^2 + 2*gamma - 1;
%% Solve the polynomial numerically.
P = [a b c d e];
R = roots(P);
%% Solve r = cos(2x) for x: x = n*pi +- 1/2 * acos(r). Using n = 0 and 1.
theta = [1/2.*acos(R) -1/2.*acos(R) pi+1/2.*acos(R) pi-1/2.*acos(R)];
figure;
hold on;
x = 0:1/1000:2*pi;
y_1 = sigma*cos(x - theta_s) + sqrt(1 + Delta*cos(2*x));
y_2 = sigma*cos(x - theta_s) - sqrt(1 + Delta*cos(2*x));
plot(x,y_1,'black');
plot(x,y_2,'black');
grid on;
%% Plot theta if real, if positive, if 1st derivative is ~zero, and if 2nd derivative is positive
for j = 1:numel(theta)
    A = isreal(theta(j));
    x_j = theta(j);
    y_j = sigma*cos(x_j - theta_s) + sqrt(1 + Delta*cos(2*x_j));
    FirstDer = sigma* sin(theta(j) - theta_s) + Delta*sin(2*theta(j))/...
        sqrt(1 + Delta*cos(2*theta(j)));
    SecDer = -sigma*cos(theta(j)-theta_s) - 2*Delta*cos(2*theta(j))/...
        (1 + Delta*cos(2*theta(j)))^(1/2) - Delta^2 * (sin(2*theta(j)))^2/...
        (1 + Delta*cos(2*theta(j)))^(3/2);
    if A == 1 && x_j>=0 && FirstDer < 1E-7 && SecDer > 0
        plot(x_j,y_j,['o','blue'])
    end
end
After you finish all plotting, get the axes handle:
ax = gca;
then write:
X = get(ax.Children,{'XData'});
And X will be cell array of all the x-axis values from all lines in the graph. One cell for each line.
For the code above:
X =
[1.961054062875753]
[4.514533853417446]
[1x6284 double]
[1x6284 double]
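If you also need the y values, the same call pattern works:
Y = get(ax.Children, {'YData'});   % one cell of y data per plotted line, in the same order as X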
(First, the code all worked. Thanks for the effort there.)
There are options here; a couple are below.
Record the values as you generate them
Within the "success" if statement, simply record the values. See edits to your code below.
This would always be the preferred option for me, it just seems much more efficient.
xyResults = zeros(0,2); %%% INITIALIZE HERE
for j = 1:numel(theta)
    A = isreal(theta(j));
    x_j = theta(j);
    y_j = sigma*cos(x_j - theta_s) + sqrt(1 + Delta*cos(2*x_j));
    FirstDer = sigma* sin(theta(j) - theta_s) + Delta*sin(2*theta(j))/...
        sqrt(1 + Delta*cos(2*theta(j)));
    SecDer = -sigma*cos(theta(j)-theta_s) - 2*Delta*cos(2*theta(j))/...
        (1 + Delta*cos(2*theta(j)))^(1/2) - Delta^2 * (sin(2*theta(j)))^2/...
        (1 + Delta*cos(2*theta(j)))^(3/2);
    if A == 1 && x_j>=0 && FirstDer < 1E-7 && SecDer > 0
        xyResults(end+1,:) = [x_j y_j]; %%%% RECORD HERE
        plot(x_j,y_j,['o','blue'])
    end
end
Get the result from the graphics objects
You can get the data you want from the actual graphics objects. This would be the option if there was just no way to capture the data as it was generated.
%First find the objects with the data you want
% (Ideally you could record handles to the lines as you generated
% them above. But then you could also simply record the answer, so
% let's assume that direct record is not possible.)
% (BTW, 'findobj' is an underused, powerful function.)
h = findobj(0,'Marker','o','Color','b','type','line')
%Then get the 'XData' field from each
capturedXdata = get(h,'XData');
capturedXdata =
2×1 cell array
[1.96105406287575]
[4.51453385341745]
%Then get the 'YData' field from each
capturedYdata = get(h,'YData');
capturedYdata =
2×1 cell array
[1.96105406287575]
[4.51453385341745]
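If you would rather have plain numeric vectors than cell arrays (assuming, as here, that each matching line holds a single point), you can flatten them:
xVals = cell2mat(capturedXdata);   % numeric column of x values
yVals = cell2mat(capturedYdata);   % numeric column of y values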

Matlab: Nonlinear equation solver

How do I solve this set of equations, and can Matlab find a solution? I'm solving for x1, x2, x3, x4, c1, c2, c3, c4.
syms c1 c2 c3 c4 x1 x2 x3 x4;
eqn1 = c1 + c2 + c3 + c4 == 2;
eqn2 = c1*x1 + c2*x2 + c3*x3 + c4*x4 == 0;
eqn3 = c1*x1^2 + c2*x2^2 + c3*x3^2 + c4*x4^2 == 2/3;
eqn4 = c1*x1^3 + c2*x2^3 + c3*x3^3 + c4*x4^3 == 0;
eqn5 = c1*x1^4 + c2*x2^4 + c3*x3^4 + c4*x4^4 == 2/5;
eqn6 = c1*x1^5 + c2*x2^5 + c3*x3^5 + c4*x4^5 == 0;
eqn7 = c1*x1^6 + c2*x2^6 + c3*x3^6 + c4*x4^6 == 2/7;
eqn8 = c1*x1^7 + c2*x2^7 + c3*x3^7 + c4*x4^7 == 0;
From what I understand, matlab has fsolve, solve, and linsolve, but I'm uncertain how to use them.
You have a system of non-linear equations, so you can use fsolve to find a solution.
First of all you need to create a function, say fcn, of a variable x, where x is a vector with your initial point. The function defines an output vector, depending on the current vector x.
You have eight variables, so your vector x will consist of eight elements. Let's rename your variables in this way:
% x1 -> x(1)     c1 -> x(5)
% x2 -> x(2)     c2 -> x(6)
% x3 -> x(3)     c3 -> x(7)
% x4 -> x(4)     c4 -> x(8)
Your function will look like this:
function F = fcn(x)
F=[x(5) + x(6) + x(7) + x(8) - 2 ;
x(5)*x(1) + x(6)*x(2) + x(7)*x(3) + x(8)*x(4) ;
x(5)*x(1)^2 + x(6)*x(2)^2 + x(7)*x(3)^2 + x(8)*x(4)^2 - 2/3 ;
x(5)*x(1)^3 + x(6)*x(2)^3 + x(7)*x(3)^3 + x(8)*x(4)^3 ;
x(5)*x(1)^4 + x(6)*x(2)^4 + x(7)*x(3)^4 + x(8)*x(4)^4 - 2/5 ;
x(5)*x(1)^5 + x(6)*x(2)^5 + x(7)*x(3)^5 + x(8)*x(4)^5 ;
x(5)*x(1)^6 + x(6)*x(2)^6 + x(7)*x(3)^6 + x(8)*x(4)^6 - 2/7 ;
x(5)*x(1)^7 + x(6)*x(2)^7 + x(7)*x(3)^7 + x(8)*x(4)^7
];
end
You can evaluate your function with some initial value of x:
x0 = [1; 1; 1; 1; 1; 1; 1; 1];
F0 = fcn(x0);
Using x0 as initial point your function returns:
F0 =
2.0000
4.0000
3.3333
4.0000
3.6000
4.0000
3.7143
4.0000
Now you can start fsolve, which will try to find some vector x such that your function returns all zeros:
[x,fval] = fsolve(@fcn, x0);
You will get something like this:
x =
0.7224
0.7224
-0.1100
-0.7589
0.3599
0.3599
0.6794
0.5768
fval =
-0.0240
0.0075
0.0493
0.0183
-0.0126
-0.0036
-0.0733
-0.0097
As you can see, the function values are really close to zero, but you probably noticed that the optimization algorithm stopped because it reached the maximum number of function evaluations stored in options.MaxFunEvals (800 by default). Another possible reason is the iteration limit stored in MaxIter (400 by default).
Redefine these values using the options parameter:
options = optimset('MaxFunEvals',2000, 'MaxIter', 1000);
[x,fval] = fsolve(@fcn, x0, options);
Now your output is much better:
x =
0.7963
0.7963
-0.0049
-0.7987
0.2619
0.2619
0.9592
0.5165
fval =
-0.0005
-0.0000
-0.0050
0.0014
0.0208
-0.0001
-0.0181
-0.0007
Just play with different parameter values, in order to achieve a tolerable precision level for your problem.
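For example, you can also tighten the stopping tolerances along with the evaluation limits (a sketch; the exact values are arbitrary and depend on the precision you need):
options = optimset('MaxFunEvals', 5000, 'MaxIter', 2000, ...
                   'TolFun', 1e-12, 'TolX', 1e-12);
[x, fval] = fsolve(@fcn, x0, options);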

Matlab: Nonlinear equation Optimization

This question is related to the post below:
Matlab: Nonlinear equation solver
With 8 variables, I got great results. However, when I increase to solving 10 variables, the results aren't so good. Even if my guess is close to the actual values and I change the maximum number of iterations to 100000, the results are still poor. Is there anything else I can do?
Here is the code:
function F = fcn(x)
F=[x(6) + x(7) + x(8) + x(9) + x(10)-2 ;
x(6)*x(1) + x(7)*x(2) + x(8)*x(3) + x(9)*x(4) + x(10)*x(5) ;
x(6)*x(1)^2 + x(7)*x(2)^2 + x(8)*x(3)^2 + x(9)*x(4)^2 + x(10)*x(5)-2/3 ;
x(6)*x(1)^3 + x(7)*x(2)^3 + x(8)*x(3)^3 + x(9)*x(4)^3 + x(10)*x(5) ;
x(6)*x(1)^4 + x(7)*x(2)^4 + x(8)*x(3)^4 + x(9)*x(4)^4 + x(10)*x(5)-2/5 ;
x(6)*x(1)^5 + x(7)*x(2)^5 + x(8)*x(3)^5 + x(9)*x(4)^5 + x(10)*x(5) ;
x(6)*x(1)^6 + x(7)*x(2)^6 + x(8)*x(3)^6 + x(9)*x(4)^6 + x(10)*x(5)-2/7 ;
x(6)*x(1)^7 + x(7)*x(2)^7 + x(8)*x(3)^7 + x(9)*x(4)^7 + x(10)*x(5) ;
x(6)*x(1)^8 + x(7)*x(2)^8 + x(8)*x(3)^8 + x(9)*x(4)^8 + x(10)*x(5)-2/9 ;
x(6)*x(1)^9 + x(7)*x(2)^9 + x(8)*x(3)^9 + x(9)*x(4)^9 + x(10)*x(5)
];
end
clc
clear all;
format long
x0 = [0.90; 0.53; 0; -0.53; -0.90; 0.23; 0.47; 0.56; 0.47; 0.23]; %Guess
F0 = fcn(x0);
[x,fval]=fsolve(@fcn, x0) %solve without custom options
options = optimset('MaxFunEvals',100000, 'MaxIter', 100000); %optimization criteria
[x,fval]=fsolve(@fcn, x0, options) %solve with the custom options
Here are the actual values I'm trying to get:
x1 = 0.906179
x2 = 0.538469
x3 = 0.000000
x4 = -0.53846
x5 = -0.906179
x6 = 0.236926
x7 = 0.478628
x8 = 0.568888
x9 = 0.478628
x10 = 0.236926
The result of optimization functions like fsolve depends very much on the initial point. A non-linear function like yours can have a lot of local minima, so one option is to randomly sample the initial point and hope it leads the optimization to a better minimum than before.
You can do it like this:
clear;
options = optimset('MaxFunEvals', 2000, 'MaxIter', 1000, 'Display', 'off');
n = 200;        % how many times to run fsolve with different initial points
z_min = 10000;  % the current minimum Euclidean distance between fval and zero
for i = 1:n
    x0 = rand(10, 1);
    [x, fval] = fsolve(@fcn, x0, options);
    z = norm(fval);
    if (z < z_min)
        z_min = z;
        x_best = x;
        f_best = fval;
        display(['i = ', num2str(i), '; z_min = ', num2str(z_min)]);
        display(['x = ', num2str(x_best')]);
        display(['f = ', num2str(f_best')]);
        fprintf('\n')
    end
end
Change the maximum number of optimization runs (n in the code above) and watch the z value; it shows how close your function is to the zero vector.
The best solution I've got so far:
x_best =
0.9062
-0.9062
-0.5385
0.5385
0.0000
0.2369
0.2369
0.4786
0.4786
0.5689
f_best =
1.0e-08 * %these are very small numbers :)
0
0.9722
0.9170
0.8740
0.8416
0.8183
0.8025
0.7929
0.7883
0.7878
For this solution z_min is 2.5382e-08.
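Note that fsolve returns the nodes and weights in an arbitrary order, so to compare against the target values you can reorder them together (a small sketch, assuming as in the code above that x_best(1:5) holds the nodes and x_best(6:10) the matching weights):
[nodes, idx] = sort(x_best(1:5), 'descend');   % reorder the nodes
weights = x_best(5 + idx);                     % keep each weight paired with its node
disp([nodes, weights])                         % compare against the target values listed above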

Computing CCA through three approaches

I have recently studied the concepts of CCA and wanted to implement it in MATLAB. However, there is already an existing Matlab command, canoncorr. I wanted to write my own code. I have studied the topic extensively and found three approaches:
1. Hardoon: this approach uses Lagrange multipliers to decompose the problem into a generalised eigenvalue problem. The code can be found here: cca_hardoon
For sanity's sake I am also giving the code here; the data has to be centered beforehand.
function [Wx, Wy, r] = cca(X,Y)
% CCA calculate canonical correlations
%
% [Wx Wy r] = cca(X,Y) where Wx and Wy contains the canonical correlation
% vectors as columns and r is a vector with corresponding canonical
% correlations.
%
% Update 31/01/05 added bug handling.
if (nargin ~= 2)
    disp('Incorrect number of inputs');
    help cca;
    Wx = 0; Wy = 0; r = 0;
    return;
end
% calculating the covariance matrices
z = [X; Y];
C = cov(z.');
sx = size(X,1);
sy = size(Y,1);
Cxx = C(1:sx, 1:sx) + 10^(-8)*eye(sx);
Cxy = C(1:sx, sx+1:sx+sy);
Cyx = Cxy';
Cyy = C(sx+1:sx+sy,sx+1:sx+sy) + 10^(-8)*eye(sy);
%calculating the Wx cca matrix
Rx = chol(Cxx);
invRx = inv(Rx);
Z = invRx'*Cxy*(Cyy\Cyx)*invRx;
Z = 0.5*(Z' + Z); % making sure that Z is a symmetric matrix
[Wx,r] = eig(Z); % basis in h (X)
r = sqrt(real(r)); % as the original r we get is lambda^2
Wx = invRx * Wx; % actual Wx values
% calculating Wy
Wy = (Cyy\Cyx) * Wx;
% by dividing it by lambda
Wy = Wy./repmat(diag(r)',sy,1);
2. MATLAB approach: note that the centering of the data is done inside canoncorr itself. A simplified sketch of the route it takes is given below.
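This is only a hedged reconstruction of the QR-plus-SVD route described in canoncorr's documentation, not its actual implementation (rank-deficiency handling is ignored and the helper name cca_qr_sketch is mine):
function [A, B, r] = cca_qr_sketch(X, Y)
    % rows of X and Y are observations, columns are variables
    n = size(X, 1);
    Xc = X - repmat(mean(X,1), n, 1);   % center the data
    Yc = Y - repmat(mean(Y,1), n, 1);
    [Q1, R1] = qr(Xc, 0);               % economy-size QR factorizations
    [Q2, R2] = qr(Yc, 0);
    d = min(size(Xc,2), size(Yc,2));    % number of canonical pairs (full-rank case)
    [U, S, V] = svd(Q1' * Q2, 0);       % SVD of the cross product of the Q factors
    A = R1 \ U(:, 1:d) * sqrt(n - 1);   % canonical coefficients for X
    B = R2 \ V(:, 1:d) * sqrt(n - 1);   % canonical coefficients for Y
    r = diag(S(1:d, 1:d))';             % canonical correlations
end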
3. CCA by plain SVD only: this approach does not require the QR decomposition and uses only the SVD. I have referred to this article: cca by svd. The key relations from that article are summarised below.
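These are the standard whitening-plus-SVD relations the article builds on (my paraphrase, since the quoted passages are not reproduced here):
Omega = Cxx^(-1/2) * Cxy * Cyy^(-1/2)
[C, Sigma, D] = svd(Omega)         % so that Omega = C * Sigma * D'
A = Cxx^(-1/2) * C                 % canonical directions for x
B = Cyy^(-1/2) * D                 % canonical directions for y
r = diag(Sigma)                    % canonical correlations
U = A.' * x;  V = B.' * y          % canonical variates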
I have tried to code this program myself but unsuccessfully.
function [A,B,r,U,V] = cca_by_svd(x,y)
% computing the means
N = size(x,1); mu_x = mean(x,1); mu_y = mean(y,1);
% subtracting the means
x = x - repmat(mu_x,N,1); y = y - repmat(mu_y,N,1);
x = x.'; y = y.';
% computing the covariance matrices
Cxx = (1/N)*x*(x.'); Cyy = (1/N)*y*(y.'); Cxy = (1/N)*x*(y.');
%dimension
m = min(rank(x),rank(y));
%m = min(size(x,1),size(y,1));
% computing the square root inverse of the matrix
[V,D]=eig(Cxx); d = diag(D);
% Making all the eigenvalues positive
d = (d+abs(d))/2; d2 = 1./sqrt(d); d2(d==0)=0; Cxx_iv=V*diag(d2)*inv(V);
% computing the square root inverse of the matrix
[V,D]=eig(Cyy); d = diag(D);
% Making all the eigenvalues positive
d = (d+abs(d))/2; d2 = 1./sqrt(d); d2(d==0)=0; Cyy_iv=V*diag(d2)*inv(V);
Omega = Cxx_iv*Cxy*Cyy_iv;
[C,Sigma,D] = svd(Omega);
A = Cxx_iv*C; A = A(:,1:m);
B = Cyy_iv*D.'; B = B(:,1:m);
A = real(A); B = real(B);
U = A.'*x; V = B.'*y;
r = Sigma(1:m,1:m);
I am running this code snippet:
clc;clear all;close all;
load carbig;
X = [Displacement Horsepower Weight Acceleration MPG];
nans = sum(isnan(X),2) > 0;
x = X(~nans,1:3);
y = X(~nans,4:5);
[A1, B1, r1, U1, V1] = canoncorr(x, y);
[A2, B2, r2, U2, V2] = cca_by_svd(x, y);
[A3, B3, r3] = cca(x.',y.',1);
The projection vectors come out like this:
>> A1
A1 =
0.0025 0.0048
0.0202 0.0409
-0.0000 -0.0027
>> A2
A2 =
0.0025 0.0048
0.0202 0.0410
-0.0000 -0.0027
>> A3
A3 =
-0.0302 -0.0050 -0.0022
0.0385 -0.0420 -0.0176
0.0020 0.0027 -0.0001
>> B1
B1 =
-0.1666 -0.3637
-0.0916 0.1078
>> B2
B2 =
-0.1668 -0.3642
-0.0917 0.1079
>> B3
B3 =
0.0000 + 0.0000i 0.3460 + 0.0000i 0.1336 + 0.0000i
0.0000 + 0.0000i -0.0967 + 0.0000i 0.0989 + 0.0000i
Question: can someone please tell me where I am going wrong? The three approaches I have referenced all solve the same problem, and ideally their solutions should converge. I admit my code cca_by_svd may be wrong, but Hardoon's code and Matlab's output should be the same. Please point out where I am going wrong.
Edit: I have rechecked and corrected my code. Now, for this dataset, methods 2 and 3 converge.
There are a few things that cca(X,Y) doesn't do that canoncorr does:
One is normalizing the data. If you add X = normc(X')' (also for Y) to your cca(X,Y) function, the output r will match that of canoncorr. If you look into canoncorr's code, you'll see that it starts with a QR decomposition of X and Y.
Another difference is that eig sorts the eigenvalues in ascending order, so cca(X,Y) should flip the output of eig(Z), for example:
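A sketch of that reordering inside cca (variable names follow the code above; r is kept as a diagonal matrix so the later Wy computation still works):
[Wx, D] = eig(Z);                       % eigen-decomposition, order not guaranteed to be descending
[~, order] = sort(diag(D), 'descend');  % largest eigenvalue first, like canoncorr
Wx = Wx(:, order);
r = sqrt(real(D(order, order)));        % canonical correlations on the diagonal, as before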
NOTE: Despite correcting these differences, I wasn't able to fully recover Wx and Wy to match the outputs of canoncorr. Ideally, Wx'*Wx should look exactly alike between cca and canoncorr.

Sort complex matrix and vector together in specific order

How can I sort vector e and matrix v together so that each eigenvalue still corresponds to its eigenvector in matrix v, as in the picture?
v =
0.9978 + 0.0022i 0.9978 - 0.0022i 0.9179 - 0.0199i 0.9179 + 0.0199i
-0.4665 + 0.0050i -0.4665 - 0.0050i 0.9805 - 0.0195i 0.9805 + 0.0195i
-0.0003 - 0.0025i -0.0003 + 0.0025i -0.0008 - 0.0162i -0.0008 + 0.0162i
0.0001 + 0.0012i 0.0001 - 0.0012i -0.0008 - 0.0173i -0.0008 + 0.0173i
So we can say that, for example, eigenvalue e(1,1) corresponds to eigenvector v(:,1). The picture shows the vector e_sort and the matrix v_sort in the specific order that I need.
The rule for vector e is that first must come
-a + b*i
and then
-a - b*i
where
0 < b_1 < b_2 < ... < b_n
Thanks.
The answer is also very similar :)
negIm = imag(e) < 0;
[e1,ie1] = sort(e(~negIm));
[e2,ie2] = sort(e(negIm));
newe = cat(1,e1,e2);
v1 = v(:,~negIm); v2 = v(:,negIm);
newv = cat(2,v1(:,ie1),v2(:,ie2));
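For completeness, e and v here are assumed to come from eig in vector/matrix form, e.g.:
A = [0 -2; 1 2];     % hypothetical test matrix with a complex-conjugate eigenvalue pair
[v, D] = eig(A);
e = diag(D);         % column vector of eigenvalues; e(k) is paired with v(:,k)
% ...then apply the reordering above to obtain newe and newv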