I can't determine the polynomial interpolation with the minimum degree - matlab

It has to satisfy the following constraints:
P(-1) = f(-1), P(0) = f(0), P(1) = f(1), P'(1) = f'(1)

Four conditions determine at most four coefficients, so let the polynomial be
P(x) = ax³ + bx² + cx + d
The given conditions translate into the linear system
-a + b - c + d = f(-1)
d = f(0)
a + b + c + d = f(1)
3a + 2b + c = f'(1)
You should be able to solve this system for a, b, c, and d; a sketch follows.
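A minimal MATLAB sketch, assuming the four data values f(-1), f(0), f(1), f'(1) are already known (the numbers below are placeholders):
fm1 = 2; f0 = 1; f1 = 3; fp1 = 0;  % placeholder values for f(-1), f(0), f(1), f'(1)
M = [-1  1 -1  1;                  % -a + b - c + d = f(-1)
      0  0  0  1;                  %              d = f(0)
      1  1  1  1;                  %  a + b + c + d = f(1)
      3  2  1  0];                 % 3a + 2b + c    = f'(1)
coeffs = M \ [fm1; f0; f1; fp1];   % coeffs = [a; b; c; d]
polyval(coeffs, -1)                % check: should reproduce f(-1)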

Related

How to make the response of the solve function symbolic?

I am solving a fourth-order equation in MATLAB using the solve function. My script looks like this:
syms m M I L Bp Bc g x
m = 0.127
M = 1.206
I = 0.001
L = 0.178
Bc = 5.4
Bp = 0.002
g = 9.8
eqn = ((m + M)*(I + m*L^2) - m^2*L^2)*x^4 + ((m + M)*Bp + (I + m*L^2)*Bc)*x^3 + ((m + M)*m*g*L + Bc*Bp)*x^2 + m*g*L*Bc*x == 0
S = solve(eqn, x)
In the answer, I should get 4 roots, but instead I get these strange expressions:
S =
0
root(z^3 + (34351166180215288*z^2)/7131724328013535 + (352922208800606144*z)/7131724328013535 + 1379250971773894912/7131724328013535, z, 1)
root(z^3 + (34351166180215288*z^2)/7131724328013535 + (352922208800606144*z)/7131724328013535 + 1379250971773894912/7131724328013535, z, 2)
root(z^3 + (34351166180215288*z^2)/7131724328013535 + (352922208800606144*z)/7131724328013535 + 1379250971773894912/7131724328013535, z, 3)
The first root, which is 0, is displayed clearly. Is it possible to make the other three roots appear as numbers as well? I looked for something about this in the documentation for the solve function, but did not find it.
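A hedged note (not part of the original post): applying vpa or double to S evaluates the root() expressions numerically, and solve also accepts a 'MaxDegree' option that returns explicit solutions for polynomials up to degree 4:
Snum = vpa(S)      % variable-precision numeric values of the root() expressions
Sdbl = double(S)   % plain double-precision numbers
S2 = solve(eqn, x, 'MaxDegree', 4);  % explicit (radical) solutions instead of root() forms
vpa(S2)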

"Solve" command with a vector in matlab

I have a problem with the equation shown below. I want to enter a vector t2 and find the roots of the equation for the different values in t2.
t2=[10:10:100]
syms x
p = x^3 + 3*x - t2;
R = solve(p,x)
R1 = vpa(R)
Easy! Don't use syms; use the general cubic formula instead:
t2 = [10:10:100];
%p = x^3 + 3*x - t2;
a = 1;
b = 0;
c = 3;
d = -t2;
D0 = b*b - 3*a*c;                           % helper quantities of the general cubic formula
D1 = 2*b^3 - 9*a*b*c + 27*a^2*d;
C = ((D1+sqrt(D1.^2 - 4*D0.^3))/2).^(1/3);  % principal cube root (complex in general)
C1 = C*1;                                   % C times the three cube roots of unity
C2 = C*(-1-sqrt(3)*1i)/2;
C3 = C*(-1+sqrt(3)*1i)/2;
f = -1/(3*a);
x1 = f*(b + C1 + D0./C1);                   % the three roots, for every entry of t2
x2 = f*(b + C2 + D0./C2);
x3 = f*(b + C3 + D0./C3);
Since b = 0, you can simplify this a bit:
% ... polynomial is the same
D0 = -3*a*c;
D1 = 27*a^2*d;
% ... the different C's are the same
f = -1/(3*a);
x1 = f*(C1 + D0./C1);
x2 = f*(C2 + D0./C2);
x3 = f*(C3 + D0./C3);
Trial>> syms x p
Trial>> Eqns = p == x^3 + 3*x - t2
It is unclear whether you want to solve a single equation or a system.
Suppose you want to solve a system:
Trial>> solx = solve(Eqns,x)
But I do not think you can get the roots that way.
You can solve one equation at a time:
Trial>> solx = solve(Eqns(3),x)
As far as I know, Maple handles this better.
In general, loops should be avoided, but this is the only solution that comes to mind right now.
t2=[10:10:100];
pp=repmat([1,0,3,0],[10,1]);  % one row of polynomial coefficients per value of t2
pp(:,4)=-t2;
R=zeros(10,3);                % preallocate: three roots per polynomial
for i=1:10
R(i,:)=roots(pp(i,:)).';      % .' avoids conjugating complex roots
end
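As a design note (an alternative sketch, not taken from the answers above), the same loop can be written with arrayfun, building each coefficient row on the fly:
t2 = 10:10:100;
R  = cell2mat(arrayfun(@(t) roots([1, 0, 3, -t]).', t2(:), 'UniformOutput', false));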

Boolean expression F = x'y + xyz':

Using DeMorgan's theorem show that:
a. (A + B)'(A' +B)' = 0
b. A + A'B + A'B' = 1
Boolean expression F = x'y + xyz':
Derive an algebraic expression for the complement F'
Show that F·F' = 0
Show that F + F' = 1
Please help me.
Assuming you know how DeMorgan's law works and you understand the basics of AND, OR, NOT operations:
1.a) (A + B)'(A' + B)' = A'B'(A')'B' = A'B'AB' = A'AB'B' = A'AB' = 0·B' = 0.
I used two facts here that hold for any boolean variable A:
AA' = 0 (when A = 0, A' = 1 and when A = 1, A' = 0 so (AA') has to be 0)
0A = 0 (0 AND anything has to be 0)
1.b) A + A'B + A'B' = A + A'(B + B') = A + A' = 1.
I used the following two facts that hold for any boolean variables A, B and C:
AB + AC = A(B + C) - just like you would do with numeric variables and multiplication and addition. Only here we work with boolean variables and AND (multiplication) and OR (addition) operations.
A + A' = 1 (when A = 0, A' = 1 and when A = 1, A' = 0, so (A + A') has to be 1)
2.a) Let's first derive the complement of F:
F' = (x'y + xyz')' = (x'y)'(xyz')' = (x + y')((xy)' + z) = (x + y')(x' + y' + z) = xx' + xy' + xz + x'y' + y'y' + y'z = 0 + xy' + xz + x'y' + y' + y'z = xy' + xz + y'(x' + 1) + y'z = xy' + xz + y' + y'z = xy' + xz + y'(z + 1) = xy' + xz + y' = xz + y'(x + 1) = xz + y'.
There is only one additional fact that I used here: for any boolean variables A and B, the following holds:
A + AB = A(1 + B) = A - logically, variable A completely determines the output of such an expression, and the AB part cannot change it (you can check this with a truth table if it's not clear, but boolean algebra should be enough). And of course, for any boolean variable, A + 1 = 1.
2.b) FF' = (x'y + xyz')(xz + y') = x'yxz + x'yy' + xyz'xz + xyz'y'.
x'yxz = (xx')yz = 0·yz = 0
x'yy' = x'·0 = 0
xyz'xz = xy(zz') = xy·0 = 0
xyz'y' = xz'(yy') = xz'·0 = 0
Therefore, FF' = 0.
2.c) F + F' = x'y + xyz' + xz + y'
This one is not so obvious. Let's start with the two middle terms and see what we can work out:
xyz' + xz = x(yz' + z) = x(yz' + z(y + y')) = x(yz' + yz + y'z) = x(y(z + z') + y'z) = x(y + y'z) = xy + xy'z.
I used the fact that we can write any boolean variable A in the following way:
A = A(B + B') = AB + AB', since (B + B') evaluates to 1 for any boolean variable B, so the initial expression is not changed by AND-ing it with (B + B').
Plugging this back in F + F' expression yields:
x'y + xy + xy'z + y' = y(x + x') + y'(xz + 1) = y + y' = 1.
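As a sanity check (a hedged sketch, not part of the original answer), both identities can be verified over all eight truth assignments in MATLAB:
[x, y, z] = ndgrid([0 1], [0 1], [0 1]);  % all eight truth assignments of x, y, z
F  = (~x & y) | (x & y & ~z);             % F  = x'y + xyz'
Fc = (x & z) | ~y;                        % F' = xz + y' (the derived complement)
assert(all(~(F(:) & Fc(:))))              % F·F'  is identically 0
assert(all(F(:) | Fc(:)))                 % F + F' is identically 1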

Solving for a variable in matlab

I have a system of two equations and I need MATLAB to solve for a certain variable. The problem is that the variable I need is buried inside an expression and inside trig functions. I wrote the following code:
function [ V1, V2 ] = find_voltages( w1, l1, d, w2, G1, G2, m, v, e, h, a, x)
k1 = sqrt((2*V1*e)/(G1^2*m*v^2));
k2 = sqrt((2*V2*e)/(G2^2*m*v^2));
A = h + l1*a;
b = -A*k1*sin(k1*w1) + a*cos(k1*w1);
B = A*cos(k1*w1) + (a/k1)*sin(k1*w1);
C = B + a*b;
c = C*k2*sinh(k2*w2) + b*cosh(k2*w2);
D = C*cosh(k2*w2) + (b/k2)*sinh(k2*w2);
bd = A*k1*sinh(k1*w1) + a*cosh(k1*w1);
Bd = A*cosh(k1*w1) + (a/k1)*sinh(k1*w1);
Cd = Bd + a*bd;
cd = -Cd*k2*sin(k2*w2) + bd*cos(k2*w2);
Dd = Cd*cos(k2*w2) + (bd/k2)*sin(k2*w2);
fsolve([c*(x-(l1+w1+d+w2)) + D == 0, cd*(x-(l1+w1+d+w2)) + Dd == 0], [V1,V2])
end
and got an error because V1 and V2 are not defined. They are part of an expression and need to be solved for. Is there a way to do this? Also, is it a problem that the expressions I pass to the solver are built up from the smaller equations above them?
Valid values:
Drift space 1 (l1): 0.11
Quad 1 length (w1): 0.11
Quad 2 length (w2): 0.048
Separation (d): 0.014
Radius of Separation 1 (G1): 0.016
Radius of Separation 2 (G2): 0.01
Voltage 1 (V1): -588.5
Voltage 2 (V2): 418
Kinetic Energy in eV: 15000
Mass (m) 9.109E-31
Kinetic Energy in Joules (K): 2.4E-15
Velocity (v): 72591415.94
Charge on an Electron (e): 1.602E-19
k1^2=(2*V1*e)/(G1^2*m*v^2): 153.4467773
k2^2=(2*V2*e)/(G2^2*m*v^2): 279.015
Start by re-writing your function as one that returns the extent to which your equations fail to hold for some guess of [V1, V2]. E.g.,
function gap = voltage_eqn(V, w1, l1, d, w2, G1, G2, m, v, e, h, a, x)
V1 = V(1) ;
V2 = V(2) ;
k1 = sqrt((2*V1*e)/(G1^2*m*v^2));
k2 = sqrt((2*V2*e)/(G2^2*m*v^2));
A = h + l1*a;
b = -A*k1*sin(k1*w1) + a*cos(k1*w1);
B = A*cos(k1*w1) + (a/k1)*sin(k1*w1);
C = B + a*b;
c = C*k2*sinh(k2*w2) + b*cosh(k2*w2);
D = C*cosh(k2*w2) + (b/k2)*sinh(k2*w2);
bd = A*k1*sinh(k1*w1) + a*cosh(k1*w1);
Bd = A*cosh(k1*w1) + (a/k1)*sinh(k1*w1);
Cd = Bd + a*bd;
cd = -Cd*k2*sin(k2*w2) + bd*cos(k2*w2);
Dd = Cd*cos(k2*w2) + (bd/k2)*sin(k2*w2);
gap(2) = c*(x-(l1+w1+d+w2)) + D ;
gap(1) = cd*(x-(l1+w1+d+w2)) + Dd ;
end
Then call fsolve from some initial V0:
Vf = fsolve(@(V) voltage_eqn(V, w1, l1, d, w2, G1, G2, m, v, e, h, a, x), V0) ;
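A minimal usage sketch with the values listed in the question (h, a, and x are not given there, so they remain assumptions you must supply):
l1 = 0.11;  w1 = 0.11;  w2 = 0.048;  d = 0.014;
G1 = 0.016; G2 = 0.01;
m  = 9.109e-31;  v = 72591415.94;  e = 1.602e-19;
% h, a, x are not listed in the question -- set them to your own values before calling
V0 = [-500; 400];  % initial guess near the expected voltages
Vf = fsolve(@(V) voltage_eqn(V, w1, l1, d, w2, G1, G2, m, v, e, h, a, x), V0);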

matlab: minimization/optimization algorithm

I use a function with multiple outputs, farina4, that computes the coefficients a, b, e, f and a vector out_p5tads_final (1 x n array) by minimizing a system of equations against the input data set p5tads (1 x n array):
function [a b e f fval out_p5tads_final] = farina4(p5tads)
f = @(coeff)calculs_farina4(coeff,p5tads);
[ans,fval] = fminsearchcon(f,coeff0,[0 0 0 0],[1 1 1 1]);% fminsearch with constraints
a = ans(1);
b = ans(2);
e = ans(3);
f = ans(4);
out_p5tads_final = p5tads_farina4(a,b,e,f);
function out_coeff = calculs_farina4(coeff0,p5tads)
%bla-bla-bla
end
function out_p5tads = p5tads_farina4(a,b,e,f)
%bla-bla-bla
end
end
After calculating a, b, e, f and out_p5tads_final, I need to calculate/minimize the RMS error with respect to out_p5tads_f4:
RMS = sqrt(mean((p5tads(:) - out_p5tads_f4(:)).^2))*100
and to repeat the function farina4 in order to find the optimal set of parameters a, b, e, f and out_p5tads_final.
I am trying to build such an optimization algorithm and do not see a way so far.
For instance, it does not seem possible to use a function with multiple outputs inside the RMS expression above, unless there is a way to index the output of farina4 somehow.
Is there an alternative optimization algorithm for the RMS that does not need fminsearch (or something similar)?
a b e and f are values between 0 and 1
out_p5tads_final is an (1 x 10) array
%
function out_coeff = calculs_farina4(coeff0,p5tads)
%
mmmm = p5tads(1);
mmmr = p5tads(2);
rmmr = p5tads(3);
mmrm = p5tads(4);
mmrr = p5tads(5);
rmrm = p5tads(6);
rmrr = p5tads(7);
mrrm = p5tads(8);
mrrr = p5tads(9);
rrrr = p5tads(10);
%
a = coeff0(1);
b = coeff0(2);
e = coeff0(3);
f = coeff0(4);
%
f_mmmm = mmmm - ((a^2*b^2*(a + b) + e^2*f^2*(e + f))/2);
f_mmmr = mmmr - (a^2*b^2*(e + f) + e^2*f^2*(a + b));
f_rmmr = rmmr - ((a^2*f^2*(b + e) + b^2*e^2*(a + f))/2);
f_mmrm = mmrm - 2*a*b*e*f;
f_mmrr = mmrr - (b*f*(a^3 + e^3) + a*e*(a^3 + f^3));
f_rmrm = rmrm - 2*a*b*e*f;
f_rmrr = rmrr - 2*a*b*e*f;
f_mrrm = mrrm - ((a^2*b^2*(e + f) + e^2*f^2*(a + b))/2);
f_mrrr = mrrr - (a^2*f^2*(b + e) + b^2*e^2*(a + f));
f_rrrr = rrrr - ((a^2*f^2*(a + f) + b^2*e^2*(b + e))/2);
%
out_coeff = f_mmmm^2 + f_mmmr^2 + f_rmmr^2 + f_mmrm^2 + f_mmrr^2 + f_rmrm^2 + f_rmrr^2 + f_mrrm^2 + f_mrrr^2 + f_rrrr^2;
end
%
function out_p5tads = p5tads_farina4(a,b,e,f)
%
p_mmmm = ((a^2*b^2*(a + b) + e^2*f^2*(e + f))/2);
p_mmmr = (a^2*b^2*(e + f) + e^2*f^2*(a + b));
p_rmmr = ((a^2*f^2*(b + e) + b^2*e^2*(a + f))/2);
p_mmrm = 2*a*b*e*f;
p_mmrr = b*f*(a^3 + e^3) + a*e*(a^3 + f^3);
p_rmrm = 2*a*b*e*f;
p_rmrr = 2*a*b*e*f;
p_mrrm = ((a^2*b^2*(e + f) + e^2*f^2*(a + b))/2);
p_mrrr = (a^2*f^2*(b + e) + b^2*e^2*(a + f));
p_rrrr = ((a^2*f^2*(a + f) + b^2*e^2*(b + e))/2);
%
out_p5tads = [p_mmmm,p_mmmr,p_rmmr,p_mmrm,p_mmrr,p_rmrm,p_rmrr,p_mrrm,p_mrrr,p_rrrr];
end
end
Thanks much in advance!
Edit (19/08/2014):
I need to find an optimal set of coefficients a, b, e, f such that the RMS value calculated from
RMS = sqrt(mean((p5tads(:) - out_p5tads_f4(:)).^2))*100
is minimal. Here the vector p5tads is used to fit the set of a, b, e, f coefficients, which are in turn used to calculate the vector out_p5tads_f4. The code should run a desired number of optimization cycles (e.g. 100 by default) and then select the set of a, b, e, f and out_p5tads_f4 that gives the minimal RMS error value.
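One possible approach (a sketch under assumptions, not the author's code): minimize the RMS directly by wrapping it in an anonymous function. This assumes p5tads is in the workspace, p5tads_farina4 has been moved into its own file (in the question it is nested inside farina4), and fminsearchcon from the File Exchange is on the path; random restarts stand in for the optimization cycles:
rms_obj = @(c) sqrt(mean((p5tads(:) - reshape(p5tads_farina4(c(1),c(2),c(3),c(4)), [], 1)).^2))*100;
best = struct('rms', Inf, 'coeff', []);
for k = 1:100                                      % e.g. 100 cycles, as in the question
    c0 = rand(1,4);                                % random start inside the [0,1] bounds
    [c, fval] = fminsearchcon(rms_obj, c0, [0 0 0 0], [1 1 1 1]);
    if fval < best.rms
        best.rms = fval;
        best.coeff = c;
    end
end
out_p5tads_best = p5tads_farina4(best.coeff(1), best.coeff(2), best.coeff(3), best.coeff(4));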