I found .8xp files that I can download onto my calculator to solve a system of complex equations, but they are useless when I'm just trying to solve one equation. The solve feature on the calculator only works with non-complex numbers, so that is useless to me as well. I would like to know whether there are any files out there that I can download so I can solve a single complex equation on the TI-84 Plus C Silver Edition calculator. I've searched for about 2 hours now and haven't found anything.
Note: I have been using Wolfram Alpha for this, but I can't use that on exams, so that's why I need it specifically for this calculator.
I have been looking for a MATLAB function that can do a nonlinear total least squares fit, i.e. fit a custom function to data which has errors in all dimensions. The simplest case is x and y data points with a different given standard deviation in x and in y for every single point. This is a very common scenario in all natural sciences, and just because most people only know how to do a least squares fit with errors in y does not mean it wouldn't be extremely useful. I know the problem is far more complicated than a simple y-error fit, which is probably why most people (not even physicists like myself) never learned how to do this properly with multidimensional errors.
I would expect software like MATLAB to be able to do it, but unless I'm bad at reading the otherwise mostly useful help pages, I think even a 'full' MATLAB license doesn't provide such fitting functionality. Other tools such as Origin, Igor, and SciPy use the freely available Fortran package ODRPACK95, for instance. There are a few contributions about total least squares or Deming fits on the File Exchange, but they're for linear fits only, which is of little use to me.
I'd be happy for any hint that can help me out.
Kind regards
First I should point out that I haven't practiced MATLAB much since I graduated last year (also as a physicist). That being said, I remember using
lsqcurvefit()
in MATLAB to perform non-linear curve fits. Now, this may or may not work, depending on what you mean by a custom function. I'm assuming you want to fit some known expression similar to one of these:
y = A*sin(x)+B
y = A*e^(B*x) + C
It is extremely difficult to perform a fit without knowing the form, e.g. as above. Ultimately, all mathematical functions can be approximated by polynomials over small enough intervals. This is something you might want to consider, as MATLAB has lots of tools for doing polynomial regression.
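For example, a minimal sketch of lsqcurvefit on the first form above, y = A*sin(x)+B (the data and the starting guess p0 here are made up):

% Fit y = A*sin(x) + B to noisy synthetic data with lsqcurvefit.
model = @(p, x) p(1)*sin(x) + p(2);                   % p = [A, B]
xData = linspace(0, 2*pi, 50).';
yData = 2.5*sin(xData) + 1 + 0.1*randn(size(xData));  % fake measurements
p0    = [1, 0];                                       % rough initial guess
pFit  = lsqcurvefit(model, p0, xData, yData);
fprintf('A = %.3f, B = %.3f\n', pFit(1), pFit(2));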
In the end, I would actually recommend writing your own fit function. There are tons of examples for this online. The idea is to know the form of the true solution, as above, and to guess the parameters A, B, C, ... . Create an error (or cost) function which produces a quantitative error (deviation) between your data and the guessed solution. The problem is then reduced to minimizing that error, for which MATLAB has lots of built-in functionality; a sketch follows below.
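Since your original question is about errors in both x and y, one hedged way to set up such a cost function is in the spirit of orthogonal distance regression: treat the "true" x-positions as extra unknowns and weight both residuals by their standard deviations. Everything below (data, sigmas, starting values) is made up for illustration:

% Sketch: weighted total-least-squares-style fit of y = A*e^(B*x) + C.
model = @(p, x) p(1)*exp(p(2)*x) + p(3);

% Synthetic data with made-up per-point uncertainties sx and sy.
xData = linspace(0, 1, 20);
yData = 2*exp(1.5*xData) + 0.5 + 0.05*randn(size(xData));
sx = 0.02*ones(size(xData));
sy = 0.05*ones(size(xData));

% Unknowns u: the 3 model parameters followed by the "true" x-values.
% Residual vector: weighted y-deviations and weighted x-deviations.
res = @(u) [ (model(u(1:3), u(4:end)) - yData)./sy, ...
             (u(4:end) - xData)./sx ];

u0   = [1, 1, 0, xData];        % initial guess for parameters and x-values
uFit = lsqnonlin(res, u0);      % fminsearch on sum(res(u).^2) also works
pFit = uFit(1:3)                % fitted A, B, C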
I want to solve a system of linear inequalities in Matlab, where the unknowns are x(1), x(2), x(3), x(4). I want the entire set of solutions x(1), x(2), x(3), x(4). Hence, I can't use linprog because it gives me just one feasible point.
Clarification: This question https://stackoverflow.com/questions/37258835/how-to-set-the-objective-function-when-using-linprog-in-matlab-to-solve-a-system was about linprog, which however gives only one possible solution. Here I'm asking how to find the entire set of solutions.
This is the set of inequalities. Any suggestion?
5x(1)+3x(2)+3x(3)+5x(4)<5
-5x(1)-3x(2)-3x(3)-5x(4)<-3
-x(2)-x(3)<0
x(2)+x(3)<1
-x(1)-x(4)<0
x(1)+x(4)<1
-3x(3)-5x(4)<-1
3x(3)+5x(4)>3
x(3)<1
-x(3)<0
x(4)<1
-x(4)<0
-5x(1)-3x(2)<0
5x(1)+3x(2)<2
x(2)<1
-x(2)<0
x(1)<1
-x(1)<0
With continuous variables we basically have zero, one, or infinitely many solutions, so of course listing all the solutions is not possible in general. However, linear programming has the concept of corner points (vertices), and those points we can enumerate, although with some effort.
Here are some links for tools that can do this:
cdd
ppl
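For a system as small as yours, a brute-force MATLAB sketch can illustrate the corner-point idea (this is only an illustration, not how cdd or ppl actually work; your strict inequalities are treated as non-strict, and the > constraint is flipped into a < one):

% Corner points of {x : A*x <= b} by intersecting every choice of 4 constraints.
A = [ 5  3  3  5;
     -5 -3 -3 -5;
      0 -1 -1  0;
      0  1  1  0;
     -1  0  0 -1;
      1  0  0  1;
      0  0 -3 -5;
      0  0 -3 -5;   % 3x(3)+5x(4) > 3 rewritten as -3x(3)-5x(4) < -3
      0  0  1  0;
      0  0 -1  0;
      0  0  0  1;
      0  0  0 -1;
     -5 -3  0  0;
      5  3  0  0;
      0  1  0  0;
      0 -1  0  0;
      1  0  0  0;
     -1  0  0  0];
b = [5; -3; 0; 1; 0; 1; -1; -3; 1; 0; 1; 0; 0; 2; 1; 0; 1; 0];

tol = 1e-9;
verts = zeros(0, 4);
for idx = nchoosek(1:size(A,1), 4).'       % each column: a choice of 4 constraints
    Ai = A(idx, :);
    if abs(det(Ai)) < tol, continue; end   % skip degenerate intersections
    v = Ai \ b(idx);                       % candidate corner point
    if all(A*v <= b + tol)                 % keep it only if it is feasible
        verts(end+1, :) = v.';             %#ok<AGROW>
    end
end
verts = unique(round(verts/tol)*tol, 'rows')   % the corner points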
A different approach is to enumerate the optimal bases using extra binary variables. (You have a zero objective, so this effectively becomes enumerating all feasible LP bases.) This approach turns the problem into a MIP, which we can enumerate with an algorithm like:
1. Solve the MIP.
2. If infeasible: stop.
3. Add a constraint to forbid the current point.
4. Go to step 1.
Here is a link that illustrates this approach to enumerate all optimal bases of a (continuous) LP problem.
Note that enumerating all integer solutions of a system of inequalities is easier. Many constraint programming tools will do that for you automatically. In addition, we can use the loop described above, cutting off each solution as it is found; a sketch follows below.
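A hedged MATLAB sketch of that loop for 0/1 integer solutions, using intlinprog and the A, b matrices from the earlier sketch (whether intlinprog fits your real problem is an assumption here, and strict inequalities are again treated as non-strict):

% Enumerate all 0/1 solutions of A*x <= b by repeatedly cutting off
% the solution just found (a so-called no-good cut).
n = size(A, 2);
Acut = A;  bcut = b;
intcon = 1:n;  lb = zeros(n, 1);  ub = ones(n, 1);
opts = optimoptions('intlinprog', 'Display', 'off');
while true
    [x, ~, flag] = intlinprog(zeros(n,1), intcon, Acut, bcut, ...
                              [], [], lb, ub, opts);
    if flag <= 0, break; end            % infeasible: all solutions found
    x = round(x);
    disp(x.')                           % one integer solution
    % No-good cut: sum_{x_i=1} x_i - sum_{x_i=0} x_i <= sum(x) - 1,
    % which is violated only by the point x we just found.
    Acut = [Acut; 2*x.' - 1];           %#ok<AGROW>
    bcut = [bcut; sum(x) - 1];          %#ok<AGROW>
end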
I've got a problem with an equation that I'm trying to solve numerically using both MATLAB and the Symbolic Toolbox. I've been through several pages of the MATLAB help, picked up a few tricks and tried most of them, still without a satisfying result.
My goal is to solve a set of three non-polynomial equations in the angles q1, q2 and q3. Those variables represent joint angles in my industrial manipulator, and what I'm trying to achieve is to solve the inverse kinematics of this model. My set of equations looks like this: http://imgur.com/bU6XjNP
I'm solving it with
numeric::solve([z1,z2,z3], [q1=x1..x2,q2=x3..x4,q3=x5..x6], MultiSolutions)
changing the xn constants according to my needs. Yet I still get some odd results: q1 is off by approximately 0.1 rad, and q2 and q3 are off by ~0.01 rad. I don't have much experience with numeric::solve, so I just need to know: is it supposed to look like that?
And if not, what would you suggest I try next? Maybe transforming the equations to polynomial form, or using a different toolbox?
Or, if trying to do this in MATLAB, how can you restrict the solutions when using solve()? I'm thinking of an equivalent of the Symbolic Toolbox's assume() and assumeAlso().
I would be grateful for your help.
The numerical solution of a system of nonlinear equations is generally set up as an iterative minimization process: finding the global minimum of the norm of the difference between the left and right hand sides of the equations. For example, fsolve essentially uses Newton iterations. Such methods perform a "deterministic" optimization: they start from an initial guess and then move through the space of unknowns, essentially following the negative of the gradient, until a solution is found.
You then have two kinds of issues:
Local minima: the stopping rule of the iteration is related to the gradient of the functional. When the gradient becomes small, the iterations are stopped. But the gradient can become small at local minima as well as at the desired global one. When the initial guess is far from the actual solution, you can get stuck in a false solution.
Ill-conditioning: large variations of the unknowns may be reflected in only small variations of the data. So small numerical errors in the data (for example, machine rounding) can lead to large variations of the unknowns.
Due to the above problems, the solution found by your numerical algorithm is likely to differ (possibly significantly) from the actual one.
I recommend that you make a consistency test: choose a starting guess (for example, when using fsolve) very close to the actual solution and verify that your final result is accurate. You will then discover that, as you move the initial guess farther away from the actual solution, your result is likely to show some (possibly large) errors. Of course, the size of the errors depends on the nature of the system of equations; in some lucky cases they may remain very small.
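A minimal sketch of that consistency test; the two-equation system here is made up (your z1, z2, z3 would go in residual instead), and the starting guesses are arbitrary:

% Compare fsolve results from a near and a far starting guess.
residual = @(q) [sin(q(1)) + q(2) - 1;      % hypothetical equation 1
                 q(1)^2 - cos(q(2))];       % hypothetical equation 2
opts = optimoptions('fsolve', 'Display', 'off');

qRef  = fsolve(residual, [0.5; 0.5], opts);      % reference solution
qNear = fsolve(residual, qRef + 0.01, opts);     % guess close to it
qFar  = fsolve(residual, [10; -10], opts);       % guess far away

disp(norm(qNear - qRef))   % should be tiny
disp(norm(qFar  - qRef))   % may be large: a different or false solution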
I'm currently developing an application which uses the camera of an iOS device to recognise equations in a photo and then match them to the correct equation in a library or database - basically an equation scanner. For example, you could scan an image of the Uncertainty Principle or the Schrodinger Equation and the iOS device would be able to tell the user its name and give some feedback.
I was wondering how to implement this in Xcode. I was thinking of using an open-source framework such as Tesseract OCR or OpenCV, but I'm not sure how to apply these to equations.
Any help would be greatly appreciated.
Thanks.
Here's the reason why this is super ambitious. What OCR does is basically take a confined set of dots and try to match it to one member of a fairly small set of characters. What you are talking about doing works at the idiom level rather than the character level. For instance, if I write Bayes' rule as an equation, I have something like:
P(A|B) = P(B|A)P(A)/P(B)
Even if it recognizes each of those characters successfully, you then have to match up features in the equation to families of equations. Not to mention, this is only one representation of Bayes' rule. There are others that use sigma notation (Laplace's variant), and some use logs so they don't have to special-case zeros.
This, btw, could be done with Bayes. Here are a few thoughts on that:
First you would have to treat the equations as Classifications, and you would have to describe them in terms of a set of features, for instance, the presence of Sigma Notation, or the application of a log.
The System would then be trained by being shown all the equations you want it to recognize, presumably several variations of each (per above). Then these classifications would have feature distributions.
Finally, when shown a new equation, the system would have to find each of these features, and then loop through the classifications and compute the overall probability that the equation matches the given classification.
This is how 90% of spam engines are done, but there, they only have two classifications: spam and not spam, and the feature representations are ludicrously simple: merely ratios of word occurrences in different document types.
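A toy MATLAB sketch of that idea, with hand-made binary features (hasSigma, hasLog, hasConditionalBar, hasIntegral); all of the data, features, and class labels here are hypothetical and only show the mechanics:

% Tiny hand-rolled naive Bayes classifier over binary equation features.
X = [0 0 1 0;    % Bayes' rule, P(A|B) form
     1 1 1 0;    % Bayes' rule, sigma/log variant
     0 0 0 1;    % Schrodinger equation (hypothetical features)
     0 1 0 1];   % uncertainty principle (hypothetical features)
y = [1; 1; 2; 3];                 % class label of each training row
classes = unique(y);

% Per-class priors and per-feature probabilities with Laplace smoothing.
prior = zeros(numel(classes), 1);
pFeat = zeros(numel(classes), size(X, 2));
for k = 1:numel(classes)
    rows = (y == classes(k));
    prior(k)   = mean(rows);
    pFeat(k,:) = (sum(X(rows,:), 1) + 1) ./ (sum(rows) + 2);
end

% Classify a new feature vector: the largest log-posterior wins.
xNew = [1 0 1 0];
logPost = log(prior) + sum(log(pFeat).*xNew + log(1-pFeat).*(1-xNew), 2);
[~, best] = max(logPost);
fprintf('Most likely class: %d\n', classes(best));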
Interesting problem, surely no simple answer.
I need to find the roots of the following equation (Mathematica syntax):
Sqrt[3]/2*x - (I - x*Sqrt[3]/2*c^2)*I/Sqrt[2*Pi]/d^3*Integrate[t*Exp[-t^2/2/d^2]/(Sqrt[3]/2*x + I*(t + b0)), {t, -Infinity, Infinity}] == 0
i.e. as the picture shows, where c, d, and b0 are given parameters, and x is the complex root to be found.
I have tried several methods, including scanning the real and imaginary parts of x and an iterative approach, but none of them could resolve all the cases.
Are there any general approaches that can solve this kind of equation efficiently, in MATLAB or Mathematica?
Did you try MATLAB's MuPAD? It is a powerful symbolic tool, very similar to Maple, which gives really good results in non-numerical mathematics. Try there: declare the equation, give assumptions to the software, e.g. assume c is real and positive (don't copy this literally, I don't remember the proper syntax), and then solve! It will very likely find a solution if one exists (sometimes in mathematical forms you didn't even know about!).
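A hedged sketch of that suggestion from the MATLAB side (the Symbolic Math Toolbox is the front end to MuPAD); whether solve can actually evaluate the integral and return a closed form is not guaranteed:

% Declare the equation from the question symbolically, with assumptions.
syms x                       % the complex unknown
syms t c d b0 real
assume(c, 'positive'); assume(d, 'positive');

kernel = t*exp(-t^2/(2*d^2)) / (sqrt(3)/2*x + 1i*(t + b0));
expr = sqrt(3)/2*x ...
     - (1i - x*sqrt(3)/2*c^2) * 1i/(sqrt(2*pi)*d^3) * int(kernel, t, -inf, inf);

sol = solve(expr == 0, x, 'ReturnConditions', true)

% If solve gives up, a numeric fallback near a guess (with made-up
% parameter values) would be something like:
% vpasolve(subs(expr, [c d b0], [1 1 0]) == 0, x, 0.5 + 0.5i)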