I've got a silly question. I took a vector math class about 10 years ago and could've sworn I remembered an operation that allowed me to multiply the values of two vectors together, like so:
Vector3 v1 = new Vector3(1, 0, 2);
Vector3 v2 = new Vector3(5, 5, 5);
// Vector3 v3 = SomeVectorOperation(v1, v2) = (1 * 5, 0 * 5, 2 * 5)
Now in reviewing all my notes, and everything I can find online, it doesn't look like this is a common operation at all. Of course I can write a function that does it:
Vector3 VectorMult(Vector3 v1, Vector3 v2) {
    return new Vector3(v1.x * v2.x, v1.y * v2.y, v1.z * v2.z);
}
So far I've found at least a few instances where an operation like this would be helpful, so I'm not sure why it wouldn't exist already in some form. So, I guess I have two questions:
Is there an easier way to get the result I'm looking for than making my own custom function?
Is there a reason why there is no standard vector operation like this to begin with?
Thank you very much for your time!
When we electrical engineers want to sound smart, we call it the Hadamard product, but otherwise it’s just the “element-wise product”.
What library are you using? GLSL? Eigen? GSL? We can look for how to do element-wise multiplication in it. (It can often be accelerated using SIMD, so an optimized implementation provided by a library will be faster than your hand-rolled function.)
Edit: Unity calls this Vector3.Scale: “Multiplies two vectors component-wise.”
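If you happen to be working in Python, numpy gives you this for free: the * operator on arrays is already element-wise. A quick sketch (the values are the ones from the question; numpy is my own suggestion, not something the asker mentioned):

import numpy as np

v1 = np.array([1, 0, 2])
v2 = np.array([5, 5, 5])

# '*' on numpy arrays is the element-wise (Hadamard) product
v3 = v1 * v2
print(v3)  # [ 5  0 10]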
I'd like to declare first of all that I'm a mathematician. This might be a stupid question, but I've gone through all the Matlab tutorials and they've gotten me nowhere. I imagine I could code this in C (it'd be exhausting), but I need Matlab for this particular function. And I don't get exactly how to do it.
Here is the pasted Matlab code of where I'm running into trouble:
function y = TAU(z,n)
    y = 0;
    for i = [1,n]
        y(z) = log(beta(z+1,i) + y(z+1)) - beta(z,i);
    end
end
(beta is an arbitrary "float" to "float" function with an index i.)
I'm having trouble declaring y as a function in which we call the function at a different argument. I want to define y_n(z) in terms of y_{n-1}(z+1). This is all done in a recursive process to create the function. I really feel like I'm missing something stupid.
By default it assigns y to be an array (or whatever you call the default index assignment). But I don't want an array. I want y to be assigned as a "function" class (i.e. takes "float" to "float"). And then I'm defining a sequence of y_n : "float" to "float", so that z to z+1 is a map on "float" to "float".
I don't know if I'm asking too much of matlab...
Help a poor mathematician who hasn't coded since the glory days of X-box mods.
...Please don't tell me I have to go back to Pari-GP/C drawing boards over something so stupid.
Please help!
EDIT: At rahnema1 & mimocha's request, I'll describe the math, and of what I am trying to do with my program. I can't see how to implement latex in here. So I'll write the latex code in a generator and upload a picture. I'm not so sure if there even is a work around to what I want to do.
As to the expected output. We'd want,
beta(z+1,i) + TAU(z+1,i) = exp(beta(z,i) + TAU(z,i+1))
And we want to grow i to a fixed value n. Again, I haven't programmed in forever, so I apologize if I'm speaking a little nonsensically.
EDIT2:
So, as @rahnema1 suggests, I should produce a reproducible example. In order to do this, I'll write the code for my beta function. It's surprisingly simple. This is for the case where the "multiplier" variable is set to log(2), but you don't need to worry about any of that.
function f = beta(z,n)
    f = 0;
    for i = 0:n-1
        f = exp(f)/(1+exp(log(2)*(n-i-z)));
    end
end
This will work fine for z a float no greater than 4. Once you make z larger it'll start to overflow. So for example, if you put in,
beta(2,100)
1.4242
beta(3,100)
3.3235
beta(3,100) - exp(beta(2,100))/(1/4+1)
0
The significance of the 100 is simply how many iterations we perform; it converges fast, so even setting this to 15 or so will still produce the same numerical accuracy. Now, the expected output I want for TAU is pretty straightforward:
TAU(z,1) = log(beta(z+1,1)) - beta(z,1)
TAU(z,2) = log(beta(z+1,2) + TAU(z+1,1)) - beta(z,2)
TAU(z,3) = log(beta(z+1,3) + TAU(z+1,2)) - beta(z,3)
...
TAU(z,n) = log(beta(z+1,n) + TAU(z+1,n-1)) - beta(z,n)
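In math notation, my reading of this recurrence (writing \tau_n(z) for TAU(z,n) and \beta_n(z) for beta(z,n)) is:

\tau_1(z) = \log \beta_1(z+1) - \beta_1(z), \qquad \tau_n(z) = \log\bigl(\beta_n(z+1) + \tau_{n-1}(z+1)\bigr) - \beta_n(z).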
I hope this helps. I feel like there should be an easy way to program this sequence, and I must be missing something obvious; but maybe it's just not possible in Matlab.
At mimocha's suggestion, I'll look into tail-end recursion. I hope to God I don't have to go back to Pari-GP, but it looks like I may have to. Not looking forward to doing a deep dive on that language, lol.
Thanks, again!
Is this what you are looking for?
function out = tau(z,n)
    % Ends recursion when n == 1
    if n == 1
        out = log(beta(z+1,1)) - beta(z,1);
        return
    end
    out = log(beta(z+1,n) + tau(z+1,n-1)) - beta(z,n);
end

function f = beta(z,n)
    f = 0;
    for i = 0:n-1
        f = exp(f) / (1 + exp(log(2)*(n-i-z)));
    end
end
This is basically your code from the most recent edit, but I've added a simple catch in the tau function. I tried running your code and noticed that n gets decremented infinitely (no exit condition).
With the modification, the code runs successfully on my laptop for integer values of n with 1e5 > n >= 1, and for floating values of z, real and complex. The code will unfortunately break for non-integer values of n, since I don't know what values to return for, say, tau(1,0) or tau(1,0.9). This should be easily fixable if you know the math, though.
However, many of the values I get are NaNs or Infs. So I'm not sure if your original problem was Out of memory error (infinite recursion), or values blowing up to infinity / NaN (numerical stability issue).
Here is a quick 100x100 grid calculation I made with this code.
Then I tested on negative values of z, and found that the imaginary part of the output looks kinda cool.
Not to mention I'm slightly geeking out over the fact that pi is showing up in the imaginary part as well :)
tau(-0.3,2) == -1.45179335740446147085 +3.14159265358979311600i
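If it helps to sanity-check the recursion outside MATLAB, here is a direct Python transcription of the two functions above (my own sketch, not part of the original answer; cmath is used so that negative z produces the complex values seen above):

import cmath
import math

def beta(z, n):
    # same loop as the MATLAB beta above, with the multiplier fixed at log(2)
    f = 0.0
    for i in range(n):
        f = cmath.exp(f) / (1 + cmath.exp(math.log(2) * (n - i - z)))
    return f

def tau(z, n):
    # the n == 1 base case ends the recursion, mirroring the MATLAB fix
    if n == 1:
        return cmath.log(beta(z + 1, 1)) - beta(z, 1)
    return cmath.log(beta(z + 1, n) + tau(z + 1, n - 1)) - beta(z, n)

print(tau(0.5, 3))   # small real z; larger z or n can overflow, as noted above
print(tau(-0.3, 2))  # complex output; the imaginary part is pi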
I’m running RStudio Version 1.1.419 with R-3.4.3 on Windows 10. I am trying to fit an (f)arima model, allowing the fractional differencing parameter d to be estimated in (-0.5, 0.5) during the optimization, i.e. allowing for antipersistence (d < 0), short memory (d = 0) and long memory (d > 0). I have tried multiple functions to accomplish that. I am aware that the default drange of fracdiff is (0, 0.5). Therefore this ...
> result <- fracdiff(MeanPrice, nar = 2, nma = 1, drange = c(-0.5,0.5))
sadly returns this..
Warning: C fracdf() optimization failure
Warning message: unable to compute correlation matrix; maybe change 'h'
Is there a way to fit fracdiff or other models (maybe arfima::arfima()?) with that drange? Your help is very much appreciated.
If you look at the package documentation, it states that the h argument for fracdiff "is used to compute a finite difference approximation to the Hessian, and hence only influences the cov, cor, and std.error computations." However, as they are referring to the Hessian, I would assume that this affects the results of the MLE. There are other functions in that package that may be helpful: fdGPH for estimating the order of fractional differencing based on the Geweke and Porter-Hudak method, and similarly fdSperio.
Take a look at the forecast package. If you estimate the order of fractional differencing using the above mentioned functions, you might be able to use the same method described in the details of the arfima function.
I am trying to minimize a highly non-linear function by optimizing three unknown parameters a, b, and c0. I'm attempting to replicate some governing equations of a casino roulette ball in Python 3.
Here is the link to the research paper:
http://www.dewtronics.com/tutorials/roulette/documents/Roulette_Physik.pdf
I will be referencing equations (35) and (40) in the paper.
Basically, I take stopwatch lap measurements of the roulette ball spinning on the wheel. For each successive lap, the lap time will increase because of losses of momentum to non-conservative forces of friction. Then I take these time measurements and fit equation (35) using a Levenberg-Marquardt least squares method in equation (40).
My question is twofold:
(1) I'm using scipy.optimize.least_squares() with method='lm', and I'm not sure how to write the objective function! Right now I have the function written exactly as it is in the paper:
import numpy as np
from scipy.optimize import least_squares

def fall_time(k, a, b, c0):
    F = (1 / (a * b)) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))
    return F

def parameter_estimation_function(x0, tk):
    a = x0[0]
    b = x0[1]
    c0 = x0[2]
    S = 0
    for i, t in enumerate(tk):
        k = i + 1
        S += (t - fall_time(k, a, b, c0))**2
    return [S, 1, 1]

sol = least_squares(parameter_estimation_function, [0.1, 0.8, -0.1], args=([tk1]), method='lm', jac='2-point', max_nfev=2000)
print(sol)
Now, in the documentation examples, I never saw the objective function written the way I have it. In the documentation, the objective function always returns the residual, not the square of the residual. Additionally, they never use the sum! So I'm wondering if the sum and the square are automatically handled under the hood of least_squares()?
(2) Perhaps my second question is a result of my failure to understand how to write the objective function. But anyhow, I'm having trouble getting the algorithm to converge on the minimum. I know this is because the Levenberg-Marquardt algorithm is "greedy" and stops near the closest minimum, but I figured that I would at least be able to converge on about the same result given different initial guesses. With slight alterations in the initial guess, I'm getting parameter results with different signs. Additionally, I've yet to find a combination of initial guesses that allows the algorithm to converge! It always times out before it finds the solution. I've even increased the number of function evaluations to 10,000 to see if it would. To no avail!
Perhaps somebody could shed some light on my mistakes here! I'm still relatively new to Python and the scipy library!
Here is some sample data for tk that I've measured myself from the video here: https://www.youtube.com/watch?v=0Zj_9ypBnzg
tk = [0.52,1.28,2.04,3.17,4.53,6.22]
tk1 = [0.51,1.4,2.09,3,4.42,6.17]
tk2 = [0.63,1.35,2.19,3.02,4.57,6.29]
tk3 = [0.63,1.39,2.23,3.28,4.70,6.32]
tk4 = [0.57,1.4,2.1,3.06,4.53,6.17]
Thanks
1) Yes, as you suspected, the sum and the square of the residuals are automatically handled. least_squares expects the callable to return the vector of residuals, and it minimizes 0.5 * sum(residuals**2) internally. Returning the scalar sum of squares padded out to [S, 1, 1] means two of the three "residuals" are constants, which leaves the '2-point' Jacobian approximation with almost nothing to work with (see the small sketch after point 2).
2) Hard to say, since I'm not deeply familiar with the problem (e.g., how many local minima exist, what constitutes a 'reasonable' result, etc.). I may investigate more later.
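To make point 1 concrete, here's a minimal toy fit (my own sketch, not from the question): the callable returns the residual vector, and least_squares does the squaring and summing itself.

import numpy as np
from scipy.optimize import least_squares

# Fit y = m*x to noiseless data; the callable returns residuals, not their squared sum.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
res = least_squares(lambda p: y - p[0] * x, x0=[0.5], method='lm')
print(res.x)  # ~[2.0]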
But for kicks I fiddled with some of the values to see what would happen. For example, you can just replace the 1/b constant with a standalone variable b_inv, and this seemed to stabilize the results quite a bit. Here's the code I used to check results. (Note that I rewrote the objective function for brevity. It simply leverages the element-wise operations of numpy arrays, without changing the overall result.)
import numpy as np
from scipy.optimize import least_squares

def fall_time(k, a, b_inv, c0):
    return (b_inv / a) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))

def parameter_estimation_function(x, tk):
    return np.asarray(tk) - fall_time(k=np.arange(1, len(tk) + 1), a=x[0], b_inv=x[1], c0=x[2])

tk_samples = [
    [0.52, 1.28, 2.04, 3.17, 4.53, 6.22],
    [0.51, 1.4, 2.09, 3, 4.42, 6.17],
    [0.63, 1.35, 2.19, 3.02, 4.57, 6.29],
    [0.63, 1.39, 2.23, 3.28, 4.70, 6.32],
    [0.57, 1.4, 2.1, 3.06, 4.53, 6.17]
]

for i in range(len(tk_samples)):
    sol = least_squares(parameter_estimation_function, [0.1, 1.25, -0.1],
                        args=(tk_samples[i],), method='lm', jac='2-point', max_nfev=2000)
    print(sol.x)
with console output:
[ 0.03621789 0.64201913 -0.12072879]
[ 3.59319972e-02 1.17129458e+01 -6.53358716e-03]
[ 3.55516005e-02 1.48491493e+01 -5.31098257e-03]
[ 3.18068316e-02 1.11828091e+01 -7.75329834e-03]
[ 3.43920725e-02 1.25160378e+01 -6.36307506e-03]
Right, I'm really pretty new to the concept of vectorization, but I'm trying to get my head round it. Currently, I'm trying to adapt some of the code that I wrote to implement Canny edge detection into a vectorized form, and what I don't understand is why this:
for r = 1:fsize
    for c = 1:fsize
        mask(r,c) = mask(r,c)/Z;
    end
end
produces a different result to this:
mask(r:fsize,c:fsize) = mask(r:fsize,c:fsize)/Z;
When my understanding is that they should do the same thing?
What are r and c in the second version? Probably you need element-wise division ./:
mask = mask./Z;
If this does not solve your problem, please provide input data to reproduce.
for r = 1:fsize
    for c = 1:fsize
        mask(r,c) = mask(r,c)/Z;
    end
end
Is equivalent to
mask(1:fsize, 1:fsize) = mask(1:fsize, 1:fsize) / Z;
Note - 1:fsize not c:fsize.
This is assuming that Z is a constant. It would be marginally faster to do * (1 / Z) - doing the division just once, and then multiplying...
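For what it's worth, the same loop-versus-vectorized equivalence is easy to check in Python/numpy (a sketch with made-up values for mask, fsize, and Z):

import numpy as np

# Hypothetical inputs: a small mask and a scalar normalizer
fsize, Z = 5, 3.0
mask = np.arange(fsize * fsize, dtype=float).reshape(fsize, fsize)

# The element-by-element loop...
looped = mask.copy()
for r in range(fsize):
    for c in range(fsize):
        looped[r, c] = looped[r, c] / Z

# ...is equivalent to dividing the whole block at once
vectorized = mask / Z
print(np.allclose(looped, vectorized))  # True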
I'm trying to compute the log of the mean of some very small values. For the current data set, the extreme points are
log_a=-1.6430e+03;
log_b=-3.8278e+03;
So in effect I want to compute (a+b)/2, or rather log((a+b)/2), where a = exp(log_a) and b = exp(log_b), since I know (a+b)/2 is too small to store as a double.
I considered trying to pad everything by a constant, so that instead of storing log_a I'd store log_a+c, but it seems that a and b are far enough apart that in order to pad log_b enough to make exp(log_b+c) computable, I'd end up making exp(log_a+c) too large.
Am I missing some obvious way to go about this computation? As far as I know MATLAB won't let me use anything but double precision, so I'm stumped as to how I can do this simple computation.
EDIT: To clarify: I can compute the exact answer for these specific values. For other runs of the algorithm, the values will be different and might be closer together. So far there have been some good suggestions for approximations; if an exact solution isn't practical, are there any other approximations for more general numbers/magnitudes of values?
Mysticial has the right idea, but for a more general solution that gives you the log of the arithmetic mean of a vector log_v of numbers already in the log domain, use:
max_log = max(log_v);                               % shift by the largest log value
logsum = max_log + log(sum(exp(log_v - max_log)));  % stable log of the sum
logmean = logsum - log(length(log_v));              % divide by n in the log domain
This is a common problem in statistical machine learning, so if you do a Google search for logsum.m you'll find a few different versions of MATLAB functions that researchers have written for this purpose. For example, there's a version on GitHub that uses the same calling conventions as sum.
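For reference, the identity this code relies on is the standard log-sum-exp shift. With m = max_i x_i, in LaTeX notation:

\log\left(\frac{1}{n}\sum_{i=1}^{n} e^{x_i}\right) = m + \log\left(\sum_{i=1}^{n} e^{x_i - m}\right) - \log n

Subtracting m makes the largest exponent exactly zero, so the sum can neither overflow nor underflow to zero.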
Well, exp(log_b) is so much smaller than exp(log_a) that you can completely ignore that term and still get the correct answer with respect to double-precision:
exp(log_a) = 2.845550077506*10^-714
exp(log_b) = 4.05118588390*10^-1663
If you are actually trying to compute (exp(log_a) + exp(log_b)) / 2, the answer would underflow to zero anyways. So it wouldn't really matter unless you're trying to take another logarithm at the end.
If you're trying to compute:
log((exp(log_a) + exp(log_b)) / 2)
Your best bet is to examine the difference between log_a and log_b. If the difference is large, then simply take the final value as equal to the larger term - log(2) since the smaller term will be small enough to completely vanish.
EDIT:
So your final algorithm could look like this:
1. Check the magnitudes. If abs(log_a - log_b) > 800, return max(log_a, log_b) - log(2).
2. Check either magnitude (they will be close together at this point). If it is much larger or smaller than 1, add/subtract a constant from both log_a and log_b.
3. Perform the calculation.
4. If the values were scaled in step 2, scale the result back.
EDIT 2:
Here's an even better solution:
if (log_a > log_b)
    return log_a + log(1 + exp(log_b - log_a)) - log(2)
else
    return log_b + log(1 + exp(log_a - log_b)) - log(2)
This will work if log_a and log_b are not too large or are negative.
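As an aside, outside MATLAB this exact pattern is built into numpy as logaddexp, which gives a quick way to sanity-check the values. A small sketch using the numbers from the question:

import numpy as np

log_a = -1.6430e+03
log_b = -3.8278e+03

# log((exp(log_a) + exp(log_b)) / 2) without ever forming the tiny exponentials
log_mean = np.logaddexp(log_a, log_b) - np.log(2)
print(log_mean)  # ~ log_a - log(2), since exp(log_b) is negligible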
Well, if you don't like my previous suggestion of completely changing platforms and are looking for an approximation, why not just use the geometric mean, exp((log_a + log_b)/2), instead?
Use http://wolframalpha.com . For example, as discussed by Mysticial, your calculation of
log(exp(-1.6430e+03) + exp(-3.8278e+03)/2)
is approximately equal to log_a. More precisely, it equals the negative of
1642.9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999288154175193167154874243862288962865800888654829363675488466381404578225092913407982036991983506370017587380105049077722517705727311433458060227246074261903850589008701929721367650576354241270720062760800558681236724831345952032973775644175750495894596292205385323394564549904750849335403418234531787942293155499938538026848481952030717783105220543888597195156662520697417952130625416040662694927878196360733032741542418365527213770518383992577797346467266676866552563022498785887306273550235307330535082355570343750317349638125974233837177558240980392298326807001406291035229026016040567173260205109683449441154277953394697235601979288239733693137185710713089424316870093563207034737497769711306780243623361236030692934897720786516684985651633787662244416960982457075265287065358586526093347161275192566468776617613339812566918101457823704547101340270795298909954224594...