clear
n = 45; % width
m = 23; % length
% total 990 blocks m*n
a = -2; b = 1; % x-limits
c = 2; d = 4;  % y-limits
f = @(x,y) 4.0*x.^3.*y + 0.7*x.^2.*y + 2.5*x + 0.2*y; % function
x=linspace(a,b,n);
y=linspace(c,d,m);
h1=(b-a)/n
h2=(d-c)/m
dA=h1*h2
[X,Y] = meshgrid(x,y); % used meshgrid because q wouldn't accept different index bounds without it
q=sum((sum(dA*f(X,Y))))
I've been using the formula for double Riemann sums from this link:
https://activecalculus.org/multi/S-11-1-Double-Integrals-Rectangles.html
These are the expected answers:
1. I = 81.3000
2. I-left = -87.4287 (my result: -84.5523)
3. I-right = -75.1072
I can't see what I'm doing wrong. I need input from somebody.
I would debug the integration scheme with a dummy constant function
f = @(x,y) 1.0*ones(size(x))
The result should be the exact total area (b-a)*(d-c) = 6, but your code gives 6.415.
The problem is that you are integrating outside the domain with those limits. You should stop the domain discretization one step early in each dimension:
h1 = (b-a)/n
h2 = (d-c)/m
x = linspace(a, b - h1, n);
y = linspace(c, d - h2, m);
This will give you the expected area for the dummy function:
q = 6.0000
and for the real function, evaluating at the top-left corner, you get:
q = -87.482
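Putting it together, a minimal end-to-end sketch of the corrected left-corner sum (reusing the values from the question; not part of the original code):
a = -2; b = 1; c = 2; d = 4;
n = 45; m = 23;
f = @(x,y) 4.0*x.^3.*y + 0.7*x.^2.*y + 2.5*x + 0.2*y;
h1 = (b-a)/n; h2 = (d-c)/m; dA = h1*h2;
x = linspace(a, b - h1, n);   % stop one step early so every sample corner
y = linspace(c, d - h2, m);   % stays inside [a,b] x [c,d]
[X, Y] = meshgrid(x, y);
q = sum(sum(dA*f(X, Y)))      % corner sum, approx -87.48 as quoted above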
You didn't do anything wrong with your code. The difference comes from the resolution of x and y used in your code; it is not sufficiently high.
For example, with n = 5000 and m = 5000:
q = sum((sum(dA*f(X,Y)))); % your Riemann sum approach
l = integral2(f,a,b,c,d); % using built-in function for double integral to verify the value of `q`
you will see that the results are now very close:
q = -81.329
l = -81.300
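For reference, a self-contained version of that high-resolution check (a sketch; grid and values as described above, exact digits may vary slightly):
a = -2; b = 1; c = 2; d = 4;
n = 5000; m = 5000;
f = @(x,y) 4.0*x.^3.*y + 0.7*x.^2.*y + 2.5*x + 0.2*y;
x = linspace(a, b, n);
y = linspace(c, d, m);
h1 = (b-a)/n; h2 = (d-c)/m; dA = h1*h2;
[X, Y] = meshgrid(x, y);
q = sum(sum(dA*f(X,Y)))       % Riemann sum approach, roughly -81.33
l = integral2(f, a, b, c, d)  % built-in double integral, roughly -81.30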
I want to find the maximum value of the second derivative of an expression when x is between 0 and 1. In other words, I am taking the derivative of cos(x^2) twice to get the second derivative, resulting in -2*sin(x^2) - 4*x^2*cos(x^2); then I want to evaluate this second derivative from x = 0 to x = 1 and display the maximum of the resulting values.
I have:
syms x
f = cos(x^2);
secondD = diff(diff(f));
for i = 0:1
y = max(secondD(i))
end
Can someone help?
You can do it easily with subs and double:
syms x
f = cos(x^2);
secondD = diff(diff(f));
% instead of the for loop
epsilon = 0.01;
specified_range = 0:epsilon:1;
[max_val, max_ind] = max(double(subs(secondD, specified_range)));
Please note that this is a numerical approach to finding the maximum, so the returned answer is not always exact. However, by decreasing epsilon (i.e., sampling the range more densely), you can generally expect a better result (though in some cases it will still not be exact).
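If you want the location of the maximum more precisely than a fixed grid gives, one alternative sketch (not part of the answer above) is to convert the symbolic result to a function handle and let fminbnd search the interval; since fminbnd looks for an interior optimum, the endpoints are compared separately:
syms x
f = cos(x^2);
secondD = diff(f, 2);                 % same second derivative as above
g = matlabFunction(secondD);          % numeric handle for -2*sin(x^2) - 4*x^2*cos(x^2)
[x_int, neg_val] = fminbnd(@(x) -g(x), 0, 1);  % maximise g by minimising -g
max_val = max([-neg_val, g(0), g(1)]) % also check the endpoints x = 0 and x = 1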
I have a cubic expression here
I am trying to determine and plot δ in the expression for P values of 0.0 to 5000. I'm really struggling to get the expression for δ in terms of the pressure P.
clear all;
close all;
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e+9;
E = 1e+12;
v = 0.17;
P = 0:100:5000
P = (4*delta*t)*w/r^2 + (2*E*t)*w^3/((1-v)*r^4);
I would appreciate it if anyone could provide pointers.
I suggest two simple methods.
1. You evaluate P as a function of delta and then plot(P,delta). This is quick and dirty, but if all you need is a plot it will do. The inconvenience is that you may have to do some trial and error to find the correct interval of P values, but you can also take a large enough value of delta_max and then restrict the x-axis limits of the plot (see the sketch after this list).
2. Your function is a simple cubic, which you can solve analytically (see here if you are lost) to invert P(delta) into delta(P).
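A quick-and-dirty sketch of option 1, using the parameter values from the question (the range of w below is only a guess and may need widening or narrowing):
t = 0.335e-9; r = 62e-6; delta = 1.2e9; E = 1e12; v = 0.17;
w = linspace(0, 5e-6, 1000);                   % trial range of deflections (the delta of the equation)
P = 4*delta*t*w/r^2 + 2*E*t*w.^3/((1-v)*r^4);  % evaluate P(w)
plot(P, w); xlim([0 5000]);                    % plot w against P and restrict the x-axis
xlabel('P'); ylabel('\delta')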
What you want is the functional inverse of your expression, i.e., δ as a function of P. Since it's a cubic polynomial, you can expect up to three solutions (roots) for a given value of P. However, I'm guessing that you're only interested in real-valued solutions and nonnegative values of P. In that case there's just one real root for each value of P.
Given the values of your parameters, it makes most sense to solve this numerically using fzero. Using the parameter names in your code (different from equations):
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e9;
E = 1e12;
v = 0.17;
f = @(w,p) 2*E*t*w.^3/((1-v)*r^4) + 4*delta*t*w/r^2 - p;
P = 0:100:5000;
w0 = [0 1]; % Bounded initial guess, valid up to very large values of P
w_sol = zeros(length(P),1);
for i = 1:length(P)
w_sol(i) = fzero(@(w) f(w,P(i)), w0); % Find solution for each P
end
figure;
plot(P,w_sol);
You could also solve this using symbolic math:
syms w p
t = 0.335*sym(1e-9);
r = 62*sym(1e-6);
delta = 1.2*sym(1e9);
E = sym(1e12);
v = sym(0.17);
w_sol = solve(p==2*E*t*w^3/((1-v)*r^4)+4*delta*t*w/r^2,w);
P = 0:100:5000;
w_sol = double(subs(w_sol(1),p,P)); % Plug in P values and convert to floating point
figure;
plot(P,w_sol);
Because of your numeric parameter values, solve returns an answer in terms of three RootOf objects, the first of which is the real one you want.
I have a state-space system with matrices A, B, C and D.
I can either create a state-space system from them, sys1 = ss(A,B,C,D), or compute the transfer function matrix, sys2 = C*inv(z*I - A)*B + D.
However, when I draw the Bode plots of both systems, they are different, while they should be the same.
What is going wrong here? Does anyone have a clue? I know, by the way, that the Bode plot generated by sys1 is correct.
The system can be downloaded here: https://dl.dropboxusercontent.com/u/20782274/system.mat
clear all;
close all;
clc;
Ts = 0.01;
z = tf('z',Ts);
% Discrete system
A = [0 1 0; 0 0 1; 0.41 -1.21 1.8];
B = [0; 0; 0.01];
C = [7 -73 170];
D = 1;
% Set as state space
sys1 = ss(A,B,C,D,Ts);
% Compute transfer function
sys2 = C*inv(z*eye(3) - A)*B + D;
% Compute the actual transfer function
[num,den] = ss2tf(A,B,C,D);
sys3 = tf(num,den,Ts);
% Show bode
bode(sys1,'b',sys2,'r--',sys3,'g--');
Edit: I made a small mistake: the transfer function matrix is sys2 = C*inv(z*I - A)*B + D, instead of sys2 = C*inv(z*I - A)*B - D, which I wrote down before. The problem still holds.
Edit 2: I have noticed that when I compute the denominator, it is correct.
syms z;
collect(det(z*eye(3) - A),z)
Your assumption that sys2 = C*inv(z*I - A)*B + D is incorrect. The correct equivalent to your state-space system (A,B,C,D) is sys2 = C*inv(s*I - A)*B + D. If you want to express it in terms of z, you'll need to invert the relationship z = exp(s*T). sys1 is the correct representation of your state-space system. What I would suggest for sys2 is the following:
sys1 = ss(A,B,C,D,Ts);
sys1_c = d2c(sys1);
s = tf('s');
sys2_c = sys1_c.C*inv(s*eye(length(sys1_c.A)) - sys1_c.A)*sys1_c.B + sys1_c.D;
sys2_d = c2d(sys2_c,Ts);
That should give you the correct result.
Due to inaccuracy of the inverse function, extra unobservable poles and zeros are added to the system. For this reason you need to compute the minimal realization of your transfer function matrix.
Meaning
% Compute transfer function
sys2 = minreal(C*inv(z*eye(3) - A)*B + D);
What you are noticing is actually a numerical instability regarding pole-zero pair cancellations.
If you run the following code:
A = [0, 1, 0; 0, 0, 1; 0.41, -1.21, 1.8] ;
B = [0; 0; 0.01] ;
C = [7, -73, 170] ;
D = 1 ;
sys_ss = ss(A, B, C, D) ;
sys_tf_simp = tf(sys_ss) ;
s = tf('s') ;
sys_tf_full = tf(C*inv(s*eye(3) - A)*B + D) ;
zero(sys_tf_simp)
zero(sys_tf_full)
pole(sys_tf_simp)
pole(sys_tf_full)
you will see that the transfer function formulated from the matrices directly has many more poles and zeros than the one formulated by MATLAB's tf function. You will also notice that every single pair of these "extra" poles and zeros is equal, meaning that they would cancel with each other if you were to simplify the rational expression. MATLAB's tf presents the simplified form, with equal pole-zero pairs cancelled out. This is algebraically equivalent to the unsimplified form, but not numerically.
When you call bode on the unsimplified transfer function, MATLAB begins its numerical plotting routine with the pole-zero pairs not cancelled algebraically. If the computer were perfect, the result would be the same as in the simplified case. However, numerical error when evaluating the numerators and denominators effectively leaves some of the pole-zero pairs "uncancelled", and as many of these poles are in the far right side of the s-plane, they drastically influence the output behavior.
Check out this link for info on this same problem but from the perspective of design: http://ctms.engin.umich.edu/CTMS/index.php?aux=Extras_PZ
In your original code, you can think of the output drawn in green as what the naive designer wanted to see when he cancelled all his unstable poles with zeros, but the output drawn in red is what he actually got because in practice, finite-precision and real-world tolerances prevent the poles and zeros from cancelling perfectly.
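As a concrete follow-up to the snippet above (a sketch, not part of the original answer): applying minreal to the unsimplified transfer function removes the numerically cancelling pole-zero pairs, after which the two Bode plots should lie on top of each other.
sys_tf_clean = minreal(sys_tf_full);          % cancel the spurious pole-zero pairs
bode(sys_tf_simp, 'b', sys_tf_clean, 'r--');  % the two curves should now coincide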
Why would there be an unobservable/uncontrollable pole? I think this issue arises only because the inverse of a transfer function matrix is inaccurate in MATLAB.
Note:
A is 3x3 and the minimal realization also has order 3.
What you computed is the inverse of a transfer function matrix, not of a symbolic or numeric matrix.
% Discrete system
Ts = 0.01;
A = [0 1 0; 0 0 1; 0.41 -1.21 1.8];
B = [0; 0; 0.01];
C = [7 -73 170];
D = 1;
z = tf('z', Ts);    % z is a discrete tf
A1 = z*eye(3) - A;  % a tf matrix with a direct feedthrough matrix A
% invert it, multiply by C and B from the left and right, and add D
G = D + C*inv(A1)*B
G is now a scalar (SISO) transfer function.
Without "minreal", G has order 9 (funny, I don't know how Matlab computes it, perhaps the "Adj(.)/det(.)" method). Matlab cannot cancel the common factors in the numerator and the denominator, because z is of class 'tf' rather than a symbolic variable.
Do you agree, or do I have a misunderstanding?
So I am trying to go through a for loop that increments by 0.1 every time and does this until another variable, h, is less than or equal to zero. Then I am supposed to graph this h variable against another variable, x. The code that I wrote looks like this:
O = 20;
v = 200;
g = 32.2;
for t = 0:.1:12
% Calculate the height
h(t) = (v)*(t)*(sin(O))-(1/2)*(g)*(t^2);
% Calculate the horizontal location
x(t) = (v)*(t)*cos(O);
if t > 0 && h <= 0
break
end
end
The error that I keep getting when running this code says "Attempted to access h(0); index must be a positive integer or logical." I don't understand what exactly is going on for this to happen. So my question is: why is this happening, and is there a way I can solve it? Thank you in advance.
You're using t as your loop variable as well as your indexing variable. This doesn't work, because you'll try to access h(0), h(0.1), h(0.2), etc., which doesn't make sense. As the error says, you can only index using positive integers (or logicals). You could replace your code with the following:
t = 0:0.1:12;
for i = 1:length(t)
% use t(i) instead of t now
end
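For completeness, a fuller sketch of the loop version with that change, keeping the early break from the question (parameter values as in the question; note that sin and cos treat O as radians here, just as in the original code):
O = 20; v = 200; g = 32.2;
t = 0:0.1:12;
h = zeros(size(t));
x = zeros(size(t));
for i = 1:length(t)
    h(i) = v*t(i)*sin(O) - 0.5*g*t(i)^2;  % height, using t(i) instead of t
    x(i) = v*t(i)*cos(O);                 % horizontal location
    if t(i) > 0 && h(i) <= 0
        break                             % stop once the height drops to zero
    end
end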
I will also point out that you don't need to use a for loop to do this. MATLAB is optimised for acting on matrices (and vectors), and will in general run faster on vectorised operations than on for loops. For instance, your equation for h could be replaced with the following:
O = 20;
v = 200;
g = 32.2;
t = 0:0.1:12;
h = v * t * sin(O) - 0.5 * g * t.^2;
The only difference is that you have to use the element-wise square (.^2) rather than the normal square (^2). This means that MATLAB will square each element of the vector t, rather than multiplying the vector t by itself.
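And a short sketch of the remaining plotting step the question asks for (the mask variable keep is just illustrative):
x = v * t * cos(O);     % horizontal location, vectorised the same way
keep = h >= 0;          % drop samples after the height has fallen below zero
plot(x(keep), h(keep)); xlabel('x'); ylabel('h')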
In short:
As the error says, the index needs to be a positive integer or logical.
But your t is t = 0:0.1:12 and therefore takes non-integer (decimal) values, so it cannot be used directly as an index.
O = 20;
v = 200;
g = 32.2;
idx_t = 0;  % separate integer counter used for indexing
for t = 0:.1:12
    idx_t = idx_t + 1;
    % Calculate the height
    h(idx_t) = (v)*(t)*(sin(O))-(1/2)*(g)*(t^2);
    % Calculate the horizontal location
    x(idx_t) = (v)*(t)*cos(O);
    if t > 0 && h(idx_t) <= 0
        break
    end
end
See this question's answer for more options: Subscript indices must either be real positive integers or logical error
I'm trying to calculate how the errors depend on the step, h, for the trapezoidal rule. The errors should get smaller with a smaller value of h, but for me this doesn't happen. This is my code:
(Iref is a reference value, calculated with Simpson's method and verified with the MATLAB function quad.)
for h = 0.01:0.1:1
x = a:h:b;
v = y(x);
Itrap = (sum(v)-v(1)/2-v(end)/2)*h;
Error = abs(Itrap-Iref)
end
I think there's something wrong with the way I'm using h, because the trapezoidal rule works for known integrals. I would be really happy if someone could help me with this, because I can't understand why the errors are "jumping around" the way they do.
I wonder if part of the problem is that, because of the way x is constructed, the intervals for the different step sizes h do not all have the same a and b. Try the following with an additional fprintf statement:
for h = 0.01:0.1:1
x = a:h:b;
fprintf('a=%f b=%f\n',x(1),x(end));
v = y(x);
Itrap = (sum(v)-v(1)/2-v(end)/2)*h;
Error = abs(Itrap-Iref);
end
Depending upon your a and b (I chose a = 0 and b = 5), all the a values were identical (as expected), but b varied from 4.55 to 5.0.
I think you always want to keep the interval [a,b] the same for each step size that you choose, in order to get a better comparison between iterations. So rather than iterating over the step size, you could instead iterate over n, the number of equally spaced sub-intervals within [a,b].
Rather than
for h = 0.01:0.1:1
x = a:h:b;
you could do something more like
% iterate over each value of n, chosen so that the step size
% is similar to what you had before
for n = [501 46 24 17 13 10 9 8 7 6]
% create an equally spaced vector of n numbers between a and b
x = linspace(a,b,n);
% get the step delta
h = x(2)-x(1);
v = y(x);
Itrap = (sum(v)-v(1)/2-v(end)/2)*h;
Error = abs(Itrap-Iref);
fprintf('a=%f b=%f len=%d h=%f Error=%f\n',x(1),x(end),length(x),h,Error);
end
When you evaluate the above code, you will notice that a and b are consistent for each iteration, h is roughly what you chose before, and the Error does increase as the step size increases.
Try the above and see what happens!
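Another option (a sketch under the same assumptions about y, a, b and Iref) is to keep iterating over h, but snap each value so that it divides [a,b] exactly:
for h_target = 0.01:0.1:1
    n = max(1, round((b-a)/h_target));   % number of equal sub-intervals
    x = linspace(a, b, n+1);
    h = x(2) - x(1);                     % actual step, close to h_target
    v = y(x);
    Itrap = (sum(v)-v(1)/2-v(end)/2)*h;
    Error = abs(Itrap-Iref);
    fprintf('h=%f Error=%f\n', h, Error);
end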