Approximation of cosh and sinh functions that give large values in MATLAB

My calculation involves cosh(x) and sinh(x) with x around 700 - 1000, which overflows double precision (cosh and sinh return Inf for x above roughly 710), so the result ends up NaN. The problem in the code is that elastic_restor_coeff shrinks when the radius is small (below 5e-9 in the code), so the argument 2*k_B*T./elastic_restor_coeff blows up. My goal is to add another integral over a radius distribution from 1e-9 to 100e-9, which is still a work in progress because I am stuck on this problem.
My workaround right now is to approximate the real part of chi_para with a step function that switches when threshold2 reaches a value of about 300. The number 300 was obtained by using the lowest possible radius and reading the cut-off value from the plot. I don't think this approach is good enough for the actual calculation, since the cut-off changes with radius, so I am looking for a better approximation method. The imaginary part of chi_para is also difficult to approximate, since it looks like a pulse instead of a step.
Here is my code, without the integration over a radius distribution:
k_B = 1.38e-23;
T = 296;
radius = [5e-9, 10e-9, 20e-9, 30e-9, 100e-9];
fric_coeff = 8*pi*1e-3.*radius.^3;
elastic_restor_coeff = 8*pi*1.*radius.^3;
time_const = fric_coeff./elastic_restor_coeff;
omega_ar = logspace(-6,6,60);
chi_para = zeros(1,length(omega_ar));
chi_perpen = zeros(1,length(omega_ar));
threshold = zeros(1,length(omega_ar));
threshold2 = zeros(1,length(omega_ar));
for i = 1:length(radius)
    for k = 1:length(omega_ar)
        omega = omega_ar(k);
        fric_coeff = 8*pi*1e-3*radius(i)^3;
        elastic_restor_coeff = 8*pi*1*radius(i)^3;
        time_const = fric_coeff/elastic_restor_coeff;
        G_para_func = @(t) ((cosh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))-1).*exp(1i.*omega.*t))./(cosh(2*k_B*T./elastic_restor_coeff)-1);
        G_perpen_func = @(t) ((sinh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))).*exp(1i.*omega.*t))./(sinh(2*k_B*T./elastic_restor_coeff));
        chi_para(k) = (1 + 1i*omega*integral(G_para_func, 0, inf));
        chi_perpen(k) = (1 + 1i*omega*integral(G_perpen_func, 0, inf));
        threshold(k) = 2*k_B*T./elastic_restor_coeff*omega;
        threshold2(k) = 2*k_B*T./elastic_restor_coeff*(omega*time_const - 1);
    end
    figure(1);
    semilogx(omega_ar,real(chi_para),omega_ar,imag(chi_para));
    hold on;
    figure(2);
    semilogx(omega_ar,real(chi_perpen),omega_ar,imag(chi_perpen));
    hold on;
end
Here is the simplified function that I would like to approximate (reconstructed from the integrand above):
f(t) = (cosh(x*exp(-t/time_const)) - 1) / (cosh(x) - 1)
where x = 2*k_B*T/elastic_restor_coeff is iterated in a loop and the maximum value of x is about 700.
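One way to sidestep the overflow entirely, rather than approximating with a step, is to evaluate the hyperbolic ratios in the log domain. The following is only a sketch of that idea, not the original method; it uses the exact identities cosh(y)-1 = (e^y/2)*(1-e^(-y))^2 and sinh(y) = (e^y/2)*(1-e^(-2y)), and assumes the scalar per-radius quantities from inside the loop above:
% stable log-domain helpers, valid for y > 0
log_coshm1 = @(y) y - log(2) + 2*log1p(-exp(-y));  % log(cosh(y)-1)
log_sinh   = @(y) y - log(2) + log1p(-exp(-2*y));  % log(sinh(y))

x = 2*k_B*T/elastic_restor_coeff;  % the large argument, ~700 and beyond
% the large exponents cancel in the subtraction before exp is applied
G_para_func   = @(t) exp(log_coshm1(x*exp(-t./time_const)) - log_coshm1(x)).*exp(1i*omega.*t);
G_perpen_func = @(t) exp(log_sinh(x*exp(-t./time_const)) - log_sinh(x)).*exp(1i*omega.*t);
The exponent in each ratio is never positive, so exp cannot overflow; for large x it is roughly x*(exp(-t/time_const) - 1), which is also why the real part of chi_para behaves like a step.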

Related

Summing Values based on Area in Matlab

I'm trying to write code in MATLAB to calculate an area-of-influence type question. This is an excerpt from my data (Weighting, x-coord, y-coord):
M =
15072.00 486.00 -292
13269.00 486.00 -292
12843.00 414.00 -267
10969.00 496.00 -287
9907.00 411.00 -274
9718.00 440.00 -265
9233.00 446.00 -253
9138.00 462.00 -275
8830.00 496.00 -257
8632.00 432.00 -253
R =
-13891.00 452.00 -398
-13471.00 461.00 -356
-12035.00 492.00 -329
-11309.00 413.00 -353
-11079.00 467.00 -375
-10659.00 493.00 -333
-10643.00 495.00 -338
-10121.00 455.00 -346
-9795.00 456.00 -367
-8927.00 485.00 -361
-8765.00 467.00 -351
I want to make a function to calculate the sum of the weightings at any given position, based on a circle of influence of radius 30 around each coordinate.
I have thought of using a for loop to calculate each point independently and summing the results, but that seems unnecessarily complicated and inefficient.
I also thought of assigning a color intensity to each circle and overlaying them, but I don't know how to change color intensity based on value. Here is my attempt so far (I would like to have a visual of the result):
function [] = Influence()
    M = xlsread('MR.xlsx','A4:C310');
    R = xlsread('MR.xlsx','E4:G368');
    % these are my values, around 300 coordinates
    % M are negative values and R positive; I want to see which are dominant in their regions
    hold on
    scatter(M(:,2),M(:,3),3000,'b','filled')
    scatter(R(:,2),R(:,3),3000,'y','filled')
    axis([350 650 -450 -200])
    hold off
end
% had to use a scalar of 3000 for some reason, as it isn't correlated to the graph size
I'd appreciate any ideas/solutions, thank you.
This is the same but with ca. 2000 data points
How about this:
r_influence = 30; % radius of influence
r = @(p,A) sqrt((p(1)-A(:,2)).^2 + (p(2)-A(:,3)).^2); % distance from p to every point in A
wsum = @(p,A) sum(A(r(p,A) <= r_influence, 1)); % sum weightings where distance is within the radius
% compute the sum on a grid
xrange = linspace(350,550,201);
yrange = linspace(-200,-450,201);
[XY,YX] = meshgrid(xrange,yrange);
map_M = arrayfun(@(p1,p2) wsum([p1,p2],M),XY,YX);
map_R = arrayfun(@(p1,p2) wsum([p1,p2],R),XY,YX);
figure(1);
clf;
imagesc(xrange,yrange,map_M + map_R);
colorbar;
Gives a picture like this:
Is that what you are looking for?
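To read off the combined weighting at a single location, the same anonymous functions can be reused directly; for example, with a hypothetical query point p:
p = [450, -300];                  % hypothetical query point
total = wsum(p, M) + wsum(p, R)   % net weighting within the radius of influence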

How to create nonlinear spaced vector in Matlab?

I'm trying to create a contour plot with focus around a particular finite range, from 1 to 1.05. At the same time, I need very high resolution close to 1. I thought I could use something like the following, but the spacing still looks linear:
out=exp(linspace(log(1),log(1.05),100))
plot(diff(out))
What is the best way to enhance the nonlinearity of the spacing when the bounds are so tight? Again, I need to maintain high density near 1, with the resolution tapering off in a nonlinear way. I have a few ideas, but I thought someone might have a quick two-liner or something of the sort.
Instead of applying the function f(x) = e^x, to get a 'steeper' nonlinearity, apply f(x) = e^(ax):
n = 20;
a = 100;
lower = 1;
upper = 1.05;
temp = exp(linspace(log(lower)*a,log(upper)*a,n))
% re-scale to be between 0 and 1
temp_01 = temp/max(temp) - min(temp)/max(temp)
% re-scale to be between your limits (i.e. 1 and 1.05)
out = temp_01*(upper-lower) + lower
now plot(diff(out),'o') produces
Note that you can use the exact same scaling scheme above with logspace, so just use
temp = logspace(...)
and then the rest is the same
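Concretely, since logspace works in powers of 10, the equivalent call would be something like (same lower, upper, a and n as above):
temp = logspace(log10(lower)*a, log10(upper)*a, n);
which produces the same temp vector as the exp(linspace(...)) version.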
You can generate a logarithmic distribution between, for example, 1 and 1000 and then scale it back to [1, 1.05]:
out = logspace(0, 3, 100);
out = ( (out-min(out(:)))*(1.05-1) ) / ( max(out(:))-min(out(:)) ) + 1;
Result:
plot(diff(out));

Reverse-calculating original data from a known moving average

I'm trying to estimate the (unknown) original datapoints that went into calculating a (known) moving average. However, I do know some of the original datapoints, and I'm not sure how to use that information.
I am using the method given in the answers here: https://stats.stackexchange.com/questions/67907/extract-data-points-from-moving-average, but in MATLAB (my code below). This method works quite well for large numbers of data points (>1000), but less well with fewer data points, as you'd expect.
window = 3;
datapoints = 150;
data = 3*rand(1,datapoints)+50;
moving_averages = [];
for i = window:size(data,2)
    moving_averages(i) = mean(data(i+1-window:i));
end
len = size(moving_averages,2)+(window-1);
a = (tril(ones(len,len),window-1) - tril(ones(len,len),-1))/window;
a = a(1:len-(window-1),:);
ai = pinv(a);
daily = ai*moving_averages';
x = 1:size(data,2);
figure(1)
hold on
plot(x,data,'Color','b');
plot(x(window:end),moving_averages(window:end),'Linewidth',2,'Color','r');
plot(x,daily(window:end),'Color','g');
hold off
axis([0 size(x,2) min(daily(window:end))-1 max(daily(window:end))+1])
legend('original data','moving average','back-calculated')
Now, say I know a smattering of the original data points. I'm having trouble figuring out how I might use that information to more accurately calculate the rest. Thank you for any assistance.
You should be able to calculate the original data exactly if at any point you can determine all but one sample of a window, i.e. in this case n-1 consecutive samples for a window of length n. (In your case) if you know A, B and (A+B+C)/3, you can solve for C. Now when you have (B+C+D)/3 (your next moving average) you can exactly solve for D. Rinse and repeat; a short code sketch of this appears after the example below. This logic works going backwards too.
Here is an example with the same idea:
% the actual vector of values
a = cumsum(rand(150,1) - 0.5);
% compute moving average
win = 3; % sliding window length
idx = hankel(1:win, win:numel(a));
m = mean(a(idx));
% coefficient matrix: m(i) = sum(a(i:i+win-1))/win
A = repmat([ones(1,win) zeros(1,numel(a)-win)], numel(a)-win+1, 1);
for i = 2:size(A,1)
    A(i,:) = circshift(A(i-1,:), [0 1]);
end
A = A / win;
% solve linear system
%x = A \ m(:);
x = pinv(A) * m(:);
% plot and compare
subplot(211), plot(1:numel(a),a, 1:numel(m),m)
legend({'original','moving average'})
title(sprintf('length = %d, window = %d',numel(a),win))
subplot(212), plot(1:numel(a),a, 1:numel(a),x)
legend({'original','reconstructed'})
title(sprintf('error = %f',norm(x(:)-a(:))))
You can see the reconstruction error is very small, even using the data sizes in your example (150 samples with a 3-samples moving average).
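As a minimal sketch of the first answer's "rinse and repeat" idea, assuming (hypothetically) that the first window-1 original samples are among the known points:
win = 3;
data = 3*rand(1,150) + 50;              % stand-in for the unknown original
m = filter(ones(1,win)/win, 1, data);   % trailing moving average, valid for i >= win
rec = zeros(size(data));
rec(1:win-1) = data(1:win-1);           % the known leading samples
for i = win:numel(data)
    % m(i) = (rec(i-win+1) + ... + rec(i))/win, so solve for rec(i)
    rec(i) = win*m(i) - sum(rec(i-win+1:i-1));
end
max(abs(rec - data))                    % essentially zero (round-off only)
Each step uses win-1 already-recovered samples plus one average to pin down the next sample exactly, which is why a few known points can anchor the whole series.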

Setting up a matrix of time and space for a function

I'm writing a program in MATLAB to observe how a function evolves in time. I'd like to set up a matrix that fills its first row with the initial function and progressively fills the later rows based on a time derivative (which is dependent on the spatial derivative). The function is arbitrary; the program just needs to 'evolve' it. This is what I have so far:
xleft = -10;
xright = 10;
xsampling = 1000;
tmax = 1000;
tsampling = 1000;
dt = tmax/tsampling;
x = linspace(xleft,xright,xsampling);
t = linspace(0,tmax,tsampling);
funset = [exp(-(x.^2)/100);cos(x)]; %Test functions.
funsetvel = zeros(size(funset)); %The functions velocities.
spacetimevalue1 = zeros(length(x), length(t));
spacetimevalue2 = zeros(length(x), length(t));
% Loop that fills the first function's spacetime matrix.
for j = 1:length(t)
    funsetvel(1,j) = diff(funset(1,:),x,2);
    spacetimevalue1(:,j) = funsetvel(1,j)*dt + funset(1,j);
end
This outputs the error, Difference order N must be a positive integer scalar, and I'm unsure what it means. I'm fairly new to MATLAB. I will exchange the Euler method for another algorithm once I can actually get some output along the proper expectation. Aside from the error associated with taking the spatial derivative, do you have any suggestions on how to evaluate this sort of process? Thank you.
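For what it's worth, the error comes from diff itself: in diff(X,N), the second argument must be the integer difference order, so passing the coordinate vector x is rejected. As a hypothetical replacement for that line, a second spatial derivative on the uniform grid can be formed with central differences:
dx = x(2) - x(1);                                     % uniform grid spacing
f = funset(1,:);
d2f = (f(3:end) - 2*f(2:end-1) + f(1:end-2)) / dx^2;  % interior points only
Note that d2f is a vector over the interior grid points rather than one number per time step, so the line that builds spacetimevalue1 would need to change accordingly.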

I'm timing a function that has a runtime of O(n^3) but my timing results do not show this, why is this happening?

numberOfTrials = 10;
numberOfSizes = 6;
sizesArray = zeros(numberOfSizes, 1);
randomMAveragesArray = zeros(numberOfSizes, 1);
for i = 1:numberOfSizes
    N = 2^i;
    %x = rand(N,1);
    randomMTimesArray = zeros(numberOfTrials, 1);
    for j = 1:numberOfTrials
        tic;
        for k = 1:N^3
            x = .323452345e-999 * .98989898989889e-953;
        end
        randomMTimesArray(j) = toc;
    end
    randomMAveragesArray(i) = mean(randomMTimesArray); % average the trials for this size
    sizesArray(i) = N;
end
randomMPolyfit = polyfit(log10(sizesArray), log10(randomMAveragesArray), 1);
randomMSlope = randomMPolyfit(1);
That is my MATLAB script. I was originally timing how long '\' takes to solve a random NxN system, whose runtime is O(n^3), but the slope of my log-log fit was always about 1.8.
My understanding is that if the timing behaves as O(n^k), then k is the slope of the log/log graph, so the slope I get should be around 3.
In the code above I replaced the solve with an arbitrary loop that runs N^3 times around a single floating-point operation, to test whether this works.
However, with the for loop I am getting a slope of 2.5.
Why is this?
Since the O() behavior is asymptotic, you sometimes cannot see it for small values of N. For example, if I set numberOfSizes = 9 and discard the first 3 points from the polynomial fit, the slope is much closer to 3:
randomMPolyfit = polyfit(log10(sizesArray(4:end)), log10(randomMAveragesArray(4:end)), 1);
randomMSlope = randomMPolyfit(1)
randomMSlope =
2.91911869082081
If you plot the timing array this behavior is clearer.
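For instance, a quick way to see it with the arrays from the script above:
loglog(sizesArray, randomMAveragesArray, 'o-');   % slope approaches 3 only for larger N
xlabel('N'); ylabel('mean time (s)');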