I'm trying to construct a vector with n elements. I want each of my elements, x_5, to be defined as follows:
x_5 = (x_1)^2 + (x_2)^2 + (x_3)^2 + (x_4)^2
where each x_i = randn. This is the script I am using right now:
x_1 = randn;
x_2 = randn;
x_3 = randn;
x_4 = randn;
for x_5 = (x_1)^2 + (x_2)^2 + (x_3)^2 + (x_4)^2
n=1000000
end
v=repelem(x_5,1000000);
I want to implement a loop so that each element of my vector is generated by this calculation individually. With this script, x_5 is computed only once, and my vector is then filled with 1,000,000 copies of that same result. Instead, I would like to perform 1,000,000 separate calculations of x_5 and fill my vector with those results.
Other suggestions besides making
for x_5 = (x_1)^2 + (x_2)^2 + (x_3)^2 + (x_4)^2
n=1000000
end
work are also welcome. I'm not sure whether a loop is the best choice here, but I would like to learn about loops through this problem.
Your loop iterates only once and the numbers are generated only once. You have to move the logic into the loop:
v = zeros(1, 1000000);
parfor n = 1:length(v)
v(n) = sum(randn(1, 4).^2);
end
You can reduce the code to
v = arrayfun(@(x) sum(randn(1, 4).^2), zeros(1, 1000000));
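Since each element of v is just the sum of four squared standard-normal draws, the whole vector can also be generated without any explicit loop; a minimal sketch, assuming memory for a 1,000,000-by-4 temporary array is acceptable:
% Draw all 4,000,000 normal samples at once and sum the squares row-wise.
v = sum(randn(1000000, 4).^2, 2).';   % transpose to a 1-by-1000000 row vector
Each element is then a chi-squared random variable with 4 degrees of freedom, so chi2rnd(4, 1, 1000000) from the Statistics and Machine Learning Toolbox would produce the same distribution directly.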
Recently, as a project, I have been working on a program in MATLAB designed to implement the Gauss-Legendre order 4 method of solving an IVP. Numerical analysis, and coding in particular, is somewhat of a weakness of mine, and thus it has been rather tough going. I used the explicit Euler method to initially seed the method, but I've been stuck for a very long time on how to use the Newton method to get closer values of k1 and k2.
Thus far, my code looks as follows:
%Gauss-Butcher Order 4
function [y] = GBOF(f,fprime,y_0,maxit,ertol)
A = [1/4,1/4-sqrt(3)/6;1/4+sqrt(3)/6,1/4];
h = [1,0,0,0,0,0,0,0,0,0,0,0,0];
t = [0,0,0,0,0,0,0,0,0,0,0,0,0];
for n = 1:12
h(n+1) = 1/(2^n);
t(n+1) = t(n)+h(n);
end
y = zeros(size(t));
y(1) = y_0;
niter = 1;
%Declaration of Variables
for i = 1:12
k = f(y(i));
y1approx = y(i) + ((1/2-sqrt(3))*h(i)*k);
y2approx = y(i) + ((1/2+sqrt(3))*h(i)*k);
k1 = f(y1approx);
k2 = f(y2approx);
%Initial guess for newton seeding
errorFunc =#(k1,k2) [k1-f(y(i) +A(1,1)*k1+A(1,2)*k2*h(i)); k2-f(y(i)+A(2,1)*k1+A(2,2)*k2*h(i))];
error = errorFunc(k1,k2);
%Function for error and creation of error variable
while norm(error) > ertol && niter < maxit
niter = niter + 1;
k1 = k1-f(k1)/fprime(k1);   % <-- highlighted: suspected problem line
k2 = k2-f(k2)/fprime(k2);
error = errorFunc(k1,k2);   % <-- highlighted: suspected problem line
%Newton-Raphson for estimating k1 and k2
end
y(i+1) = y(i) +(h(i)*(k1+k2)/2);
%Creation of next
end
disp(t);
The part of the code I believe is causing this to fail is highlighted. When I enter a basic IVP (e.g. y' = y, y(0) = 1), the output I get is not correct.
Any input on how I could go about fixing this would be much appreciated.
Thank you.
I have tried replacing the k1s and k2s in the problem with the values used in the formula extrapolated from the Butcher tableau, but nothing changed. I can't think of any other way to tackle this issue.
The implicit system you have to solve is
k1 = f(y + h*(a11*k1 + a12*k2) )
k2 = f(y + h*(a21*k1 + a22*k2) )
This is also what your residual function errorFunc is meant to compute.
The naive way is just to iterate this system, like any other fixed-point iteration.
This system has a linearization with respect to h at the base point y:
k1 = f(y) + h*f'(y)*(a11*k1 + a12*k2) + O(h^2)
k2 = f(y) + h*f'(y)*(a21*k1 + a22*k2) + O(h^2)
Seen as a simple fixed-point iteration, the contraction factor is O(h), so if h is small enough the factor is smaller than 1 and convergence is guaranteed, with the order of h in the residual increasing by 1 in each step. So with 6 iterations the error in the implicit system is O(h^6), which is one order smaller than the local truncation error.
One can reduce the number of iterations if k1,k2 start with some higher-order estimates, not just with k1=k2=f(y).
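For illustration, a minimal MATLAB sketch of this naive fixed-point iteration for a single step (assuming a scalar f, the current value y, the step size h, and the 2x2 tableau matrix A from the question):
% Naive fixed-point iteration for the implicit stage values k1, k2.
k1 = f(y);  k2 = f(y);                            % simple starting guesses
for iter = 1:6                                    % residual is O(h^6) after 6 sweeps
    k1new = f(y + h*(A(1,1)*k1 + A(1,2)*k2));
    k2new = f(y + h*(A(2,1)*k1 + A(2,2)*k2));
    k1 = k1new;  k2 = k2new;
end
ynext = y + h*(k1 + k2)/2;                        % order-4 update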
One can reduce the right side residual by removing the terms that are linear in h (on both sides of course).
k1 - h*f'(y)*(a11*k1 + a12*k2) = f(y + h*(a11*k1 + a12*k2) ) - h*f'(y)*(a11*k1 + a12*k2)
k2 - h*f'(y)*(a21*k1 + a22*k2) = f(y + h*(a21*k1 + a22*k2) ) - h*f'(y)*(a21*k1 + a22*k2)
The right side is evaluated at the current values, the left side is a linear system for the new values. So
K = K - solve(M, rhs)
with
K = [ k1; k2]
M = [ 1 - h*f'(y)*a11, -h*f'(y)*a12 ; -h*f'(y)*a21, 1 - h*f'(y)*a22 ]
= I - h*f'(y)*A
rhs = [ k1 - f(y + h*(a11*k1 + a12*k2) ); k2 - f(y + h*(a21*k1 + a22*k2) ) ]
= K - f(Y)
where
Y = y+h*A*K
This should work for scalar equations; for systems it involves Kronecker products of matrices and vectors.
As the linear part is taken out of the residual, the contraction factor in this new fixed-point iteration is O(h^2), possibly with smaller constants, so it converges faster and, it has been argued, for larger step sizes.
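A sketch of one step of this simplified Newton iteration for a scalar ODE (the names f, fprime, y, h, A, ertol, maxit follow the question; f is assumed to be written elementwise so that it can be applied to a 2-vector):
% Simplified Newton iteration: the matrix M is fixed for the whole step.
K     = [f(y); f(y)];                 % starting guess k1 = k2 = f(y)
M     = eye(2) - h*fprime(y)*A;       % M = I - h*f'(y)*A
res   = K - f(y + h*(A*K));           % residual of the implicit system
niter = 0;
while norm(res) > ertol && niter < maxit
    K     = K - M \ res;              % linear solve replaces the plain iteration
    res   = K - f(y + h*(A*K));
    niter = niter + 1;
end
ynext = y + h*(K(1) + K(2))/2;        % order-4 Gauss-Legendre update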
What you have in the code regarding the implicit method step shows the steps of the algorithm in the right order, but with wrong arguments in the function calls.
What you are doing with h is not clear. One could guess that the task is to explore the results of the method for a collection of step sizes. That would mean that h is constant within each integration run, with the step size halved and the number of steps increased for the next run.
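If that guess is right, a possible outer structure (purely an illustration of that reading, with an assumed integration interval of [0, 1]) would look like this:
% Hypothetical outer loop over step sizes: each run uses a constant h = 1/2^n.
for n = 0:12
    h = 1/2^n;                 % constant step size within this run
    nsteps = 2^n;              % so that nsteps*h covers the assumed interval [0, 1]
    y = y_0;
    for i = 1:nsteps
        % ... one implicit Gauss-Legendre step of size h, updating y ...
    end
    fprintf('h = %g, y(1) approx %g\n', h, y);
end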
I'm running MATLAB code on my university's HPC cluster. I have two versions of the code. The second version, despite generating a smaller array, seems to require more memory. I would like your help to understand whether this is in fact the case and why.
Let me start from some preliminary lines:
clear
rng default
%Some useful components
n=7^4;
vectors{1}=[1,20,20,20,-1,Inf,-Inf];
vectors{2}=[-19,19,19,19,-20,Inf,-Inf];
vectors{3}=[-19,0,0,0,-20,Inf,-Inf];
vectors{4}=[-19,0,0,0,-20,Inf,-Inf];
T_temp = cell(1,4);
[T_temp{:}] = ndgrid(vectors{:});
T_temp = cat(4+1, T_temp{:});
T = reshape(T_temp,[],4); %all the possible 4-tuples from vectors{1}, ..., vectors{4}
This is the first version of the code: I construct the matrix D1 listing all possible unordered pairs of rows from T.
indices_pairs=pairIndices(n);
D1=[T(indices_pairs(:,1),:) T(indices_pairs(:,2),:)];
This is the second version of the code: I construct the matrix D2 listing a random draw of m=10^6 unordered pairs of rows from T
m=10^6;
p=n*(n-1)/2;
random_indices_pairs = randperm(p, m).';
[C1, C2] = myind2ind(random_indices_pairs, n);
indices_pairs=[C1 C2];
D2=[T(indices_pairs(:,1),:) T(indices_pairs(:,2),:)];
My question: when generating D2, the HPC job runs out of memory, while generating D1 works fine, despite D1 being a larger array than D2. Why is that the case?
These are complementary functions used above:
function indices = pairIndices(n)
[y, x] = find(tril(logical(ones(n)), -1)); %#ok<LOGL>
indices = [x, y];
end
function [R , C] = myind2ind(ii, N)
jj = N * (N - 1) / 2 + 1 - ii;
r = (1 + sqrt(8 * jj)) / 2;
R = N -floor(r);
idx_first = (floor(r + 1) .* floor(r)) / 2;
C = idx_first-jj + R + 1;
end
I would like to use the output of a function as input for a function that builds a polynomial.
Here is my code:
function c = interpolation(x, y)
n = length(x);
V = ones(n);
for j = 2:n
V(:,j) = x.*V(:,j-1);
end
c = V \ y;
disp(V)
for i = 0:n-1
fprintf('c%d= %.3f\n', i, c(i+1));
end
polynome(c);
function p = polynome(x)
n = length(x);
for l= 0:n-1
polynome = polynome * x^l;
end
end
The first function alone works. That is, my code works if I comment out everything from line 13 to the end, and I get the c values, whose number depends on the length of the x vector entered at the beginning.
I want to use those c values to build a polynomial of the form p(x) = c0 + c1*x + c2*x^2 + ... + c(n-1)*x^(n-1), plot that polynomial, and also plot the points (xi, yi) given at the beginning through the two input vectors of the function interpolation.
Can someone help me here?
Make a separate function polynome, e.g.
function y=polynome(x,c)
y=sum(c.*x.^(0:length(c)-1));
or just use
y=sum(c.*x.^(0:length(c)-1));
to compute the polynomial for your coefficients c.
If you have multiple values of x, e.g.
x=[0:.1:3]';
y=repmat(x,1,numel(c)).^repmat((0:numel(c)-1),numel(x),1)*c';
should give you the values of the polynomial.
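To also plot the resulting polynomial together with the data points, a minimal sketch (assuming c is the coefficient vector returned by interpolation and x, y are the original input vectors; the implicit expansion in the evaluation needs MATLAB R2016b or newer):
xx = linspace(min(x), max(x), 200).';     % fine grid for a smooth curve
yy = (xx.^(0:numel(c)-1)) * c(:);         % Vandermonde-style evaluation of p(xx)
plot(xx, yy, '-', x, y, 'o');             % curve plus the original data points
xlabel('x'); ylabel('p(x)');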
I have this code for the Composite Simpson's Rule. However, I have been fiddling with it for quite a while and I can't seem to get it to work.
How can I fix this algorithm?
function out = Sc2(func,a,b,N)
% Sc(func,a,b,N)
% This function calculates the integral of func on the interval [a,b]
% using the Composite Simpson's rule with N subintervals.
x=linspace(a,b,N+1);
% Partition [a,b] into N subintervals
fx=func(x);
h=(b-a)/(2*N);
%define for odd and even sums
sum_even = 0;
for i = 1:N-1
x(i) = a + (2*i-2)*h;
sum_even = sum_even + func(x(i));
end
sum_odd = 0;
for i = 1:N+1
x(i) = a + (2*i-1)*h;
sum_odd = sum_odd + func(x(i));
end
% Define the length of a subinterval
out=(h/3)*(fx(1)+ 2*sum_even + 4*sum_odd +fx(end));
% Apply the composite Simpsons rule
end
Well, for one thing, your h definition is wrong. h stands for the step size of each interval you want to estimate; you are unnecessarily dividing by 2, so remove that 2 from your h definition. You are also evaluating your function at the values of n, not x. You should probably remove this statement because you end up not using it in the end.
Also, you are summing from 1 to N+1 or from 1 to N-1 for the odd and even values respectively. This is incorrect. Remember, you are choosing every other value in an odd interval, or even interval, so this should really loop from 1 to N/2 - 1. To escape figuring out what to multiply i by, just skip that and make your loop go in steps of 2. That's beside the point, though.
I would recommend that you don't loop over and add up the values for the odd and even intervals that way. You can easily do that by specifying the odd or even values of x and just applying a sum. I would use the colon operator and specify a step size of 2 to exactly determine which values of x for odd or even you want to apply to the overall sum.
You also declare x to be your (N+1)-point grid, yet you overwrite those values in your loops. You don't actually need that x declaration in your code in that case.
As such, here's a modified version of your function with the optimizations I have in mind:
function out = Sc2(func, a, b, N)
h = (b - a) / N; %// Width of each interval
odd = 1 : 2 : N-1; %// Define odd interval
xodd = a + h*odd; %// Create odd x values
even = 2 : 2 : N-2; %// Define even interval
xeven = a + h*even; %// Create even x values
%// Return area
out = (h/3)*(func(a) + 4*sum(func(xodd)) + 2*sum(func(xeven)) + func(b));
end
However, if you want to get your code working, you simply have to change your for loop iteration limits as well as your value of h. You also have to remove some lines of code, and change some variable names. Therefore:
function out = Sc2(func,a,b,N)
% Sc(func,a,b,N)
% This function calculates the integral of func on the interval [a,b]
% using the Composite Simpson's rule with N subintervals.
%// Define width of each segment
h = (b - a) / N; %// Change
%//define for odd and even sums
sum_even = 0;
for i = 2 : 2 : N-2 %// Change
x = a + i*h; %// Change
sum_even = sum_even + func(x);
end
sum_odd = 0;
for i = 1 : 2 : N-1 %// Change
x = a + i*h; %// Change
sum_odd = sum_odd + func(x);
end
%// Output area
out = (h / 3)*(func(a) + 2*sum_even + 4*sum_odd + func(b)); %// Change
end
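As a quick sanity check (an illustrative example, not part of the original answer), the corrected function can be compared against MATLAB's integral on a smooth test problem; N should be even for the composite Simpson's rule:
f = @(x) sin(x);                      % exact integral on [0, pi] is 2
approx = Sc2(f, 0, pi, 100);          % composite Simpson's rule with N = 100
exact  = integral(f, 0, pi);          % adaptive quadrature for reference
fprintf('Simpson: %.10f   integral: %.10f   error: %.2e\n', ...
        approx, exact, abs(approx - exact));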
Let x = [1, ..., t] be a vector with t components and let A and P be arrays. I asked myself whether there is any way to shorten this, as it looks very cumbersome:
for n = 1:t
for m = 1:n
H(n,m) = A(n,m) + x(n) * P(n,m)
end
end
My suggestion: bsxfun(@times,x,P) + A;
e.g.
A = rand(3);
P = rand(3);
x = rand(3,1);
for n = 1:3
for m = 1:3
H(n,m) = A(n,m) + x(n) * P(n,m);
end
end
H2 = bsxfun(@times,x,P) + A;
%//Check that they're the same
all(H(:) == H2(:))
returns
ans = 1
EDIT:
Amro is right! Since the second loop depends on the first, use tril:
H2 = tril(bsxfun(@times,x,P) + A);
Are the matrices square, by the way? Because otherwise that also creates other problems.
tril(A + P.*repmat(x',1,t))
EDIT: This is for when x is a row vector.
If x is a column vector, then use tril(A + P.*repmat(x,1,t))
If your example code is correct, then H(i,j) = 0 for any j > i, e.g. H(1,2).
For t = 3, for example, you would have:
H =
'A(1,1) + x(1) * P(1,1)' [] []
'A(2,1) + x(2) * P(2,1)' 'A(2,2) + x(2) * P(2,2)' []
'A(3,1) + x(3) * P(3,1)' 'A(3,2) + x(3) * P(3,2)' 'A(3,3) + x(3) * P(3,3)'
As I pointed out in the comments, unless it was a typo, the second for-loop counter depends on that of the first for-loop...
In case it was intentional, I came up with the following solution:
% some random data
t = 10;
x = (1:t)';
A = rand(t,t);
P = rand(t,t);
% double for-loop
H = zeros(t,t);
for n = 1:t
for m = 1:n
H(n,m) = A(n,m) + x(n) * P(n,m);
end
end
% vectorized using linear-indexing
[a,b] = ndgrid(1:t,1:t);
idx = sub2ind([t t], nonzeros(tril(a)), nonzeros(tril(b)));
xidx = nonzeros(tril(a));
HH = zeros(t);
HH(tril(true(t))) = A(idx) + x(xidx).*P(idx);
% check the results are the same
assert(isequal(H,HH))
I like @Dan's solution better. The only advantage here is that I do not compute unnecessary values (since the upper half of the matrix is zeros), while the other solution computes the full matrix and then cuts back the extra stuff.
A good start would be
H = A + x*P
This may not be a working solution; you'll have to check conformability of arrays and vectors, and make sure that you're using the correct multiplication, but it should be enough to point you in the right direction. If you're new to MATLAB, be aware that vectors can be either 1xn or nx1, i.e. row and column vectors are different species, unlike in many programming languages. If x isn't what you want on the right-hand side, you may want its transpose, x' in MATLAB.
MATLAB is, from one point of view, an array language; explicit loops are often unnecessary and frequently not even a good way to go.
Since the range of the second loop is 1:n, you can take the lower-triangular parts of the matrices A and P for the calculation:
H = bsxfun(@times,x(:),tril(P)) + tril(A);
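On MATLAB R2016b or newer, implicit expansion makes bsxfun unnecessary, so the same lower-triangular result can be written more compactly:
H = tril(A + x(:).*P);   % x(:) forces a column vector so it scales the rows of P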