Solving a triangle means finding all possible triangles when some of its sides a, b, c and angles A, B, C are known (A is the angle opposite a, and so on). Depending on the data, this problem can have 0, 1, 2, or infinitely many solutions.
I want to write a procedure to solve triangles. The user would feed the procedure some of the values a, b, c, A, B, and C (if it helps keep things simple, you can assume that the user will avoid situations with infinitely many solutions) and the procedure will compute the remaining ones. The usual method relies on the Law of Sines or the Law of Cosines, depending on the situation.
Since it is for a Maths class where I also want to show graphs of functions, I will implement it in Maple. If Maple is not suitable for your answer, please suggest another language (I am reasonably competent in Java and beginner in Python for example).
My naive idea is to use conditional instructions if...then...else to determine the case at hand, but that is a little tedious. Java has a switch that could make things shorter and clearer, but I am hoping for a smarter structure.
Hence my question: Assume that some variables are related by known relations. Is there a simple and clear way to organize a procedure to determine missing variables when only some values are given?
PS: I am not sure how I should tag this question. Any suggestion is welcome.
One approach could be to make all of the arguments to your procedure optional with default values that correspond to the names: A, B, C, a, b, c.
Since we can assume that the missing variables are exactly those that are not of type 'numeric', it is easy to determine which variables do not yet have values and pass those to a solve command that finds the remaining sides and angles.
Something like the following could be a good start:
trisolve := proc( { side1::{positive,symbol} := A, side2::{positive,symbol} := B, side3::{positive,symbol} := C,
                    angle1::{positive,symbol} := a, angle2::{positive,symbol} := b, angle3::{positive,symbol} := c } )
    # Anything not supplied keeps its symbolic default, so the unknowns are exactly the non-numeric entries.
    local missing := remove( hastype, [ side1, side2, side3, angle1, angle2, angle3 ], numeric );
    return solve( { 180 = angle1 + angle2 + angle3,
                    side1/sin(angle1*Pi/180) = side2/sin(angle2*Pi/180),
                    side1/sin(angle1*Pi/180) = side3/sin(angle3*Pi/180),
                    side2/sin(angle2*Pi/180) = side3/sin(angle3*Pi/180),
                    side1^2 = side2^2 + side3^2 - 2*side2*side3*cos(angle1*Pi/180) },
                  missing );
end proc:
The following call:
trisolve( side1 = 1, angle1 = 90, angle2 = 45 );
returns:
[B = (1/2)*sqrt(2), C = (1/2)*sqrt(2), c = 45]
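Since the question also mentions Python, here is a minimal sketch of the same pattern using SymPy's solve. The trisolve name, the keyword-argument convention, and angles given in degrees are illustrative assumptions, not part of the Maple answer above:
# Minimal sketch: collect the unknowns, hand all triangle relations to solve.
# a, b, c are sides; A, B, C are the opposite angles in degrees (assumption).
from sympy import symbols, sin, cos, pi, solve

def trisolve(**known):
    A, B, C, a, b, c = symbols("A B C a b c", positive=True)
    names = {"A": A, "B": B, "C": C, "a": a, "b": b, "c": c}
    relations = [
        A + B + C - 180,
        a/sin(A*pi/180) - b/sin(B*pi/180),
        a/sin(A*pi/180) - c/sin(C*pi/180),
        a**2 - (b**2 + c**2 - 2*b*c*cos(A*pi/180)),
    ]
    subs = {names[k]: v for k, v in known.items()}
    # Whatever was not supplied is treated as missing and handed to solve.
    missing = [s for n, s in names.items() if n not in known]
    return solve([r.subs(subs) for r in relations], missing, dict=True)

print(trisolve(a=1, A=90, B=45))
# expected: [{C: 45, b: sqrt(2)/2, c: sqrt(2)/2}]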
A little bit of background: I am trying to make a model for clustering a Design Structure Matrix (DSM). I made a draft model and have a couple of questions. Most of them are not directly related to DSM per se.
include "globals.mzn";
int: dsmSize = 7;
int: maxClusterSize = 7;
int: maxClusters = 4;
int: powcc = 2;
enum dsmElements = {A, B, C, D, E, F,G};
array[dsmElements, dsmElements] of int: dsm =
[|1,1,0,0,1,1,0
|0,1,0,1,0,0,1
|0,1,1,1,0,0,1
|0,1,1,1,1,0,1
|0,0,0,1,1,1,0
|1,0,0,0,1,1,0
|0,1,1,1,0,0,1|];
array[1..maxClusters] of var set of dsmElements: clusters;
array[1..maxClusters] of var int: clusterCard;
constraint forall(i in 1..maxClusters)(
clusterCard[i] = pow(card(clusters[i]), powcc)
);
% #1
% constraint forall(i, j in clusters where i != j)(card(i intersect j) == 0);
% #2
constraint forall(i, j in 1..maxClusters where i != j)(
card(clusters[i] intersect clusters[j]) == 0
);
% #3
% constraint all_different([i | i in clusters]);
constraint (clusters[1] union clusters[2] union clusters[3] union clusters[4]) = dsmElements;
var int: intraCost = sum(i in 1..maxClusters, j, k in clusters[i] where k != j)(
(dsm[j,k] + dsm[k,j]) * clusterCard[i]
) ;
var int: extraCost = sum(el in dsmElements,
c in clusters where card(c intersect {el}) = 0,
k,j in c)(
(dsm[j,k] + dsm[k,j]) * pow(card(dsmElements), powcc)
);
var int: TCC = trace("\(intraCost), \(extraCost)\n", intraCost+extraCost);
solve maximize TCC;
Question 1
I was under the impression that constraints #1 and #2 are the same. However, it seems they are not. Why? What is the difference?
Question 2
How can I replace constraint #2 with all_different? Does it make sense?
Question 3
Why does trace("\(intraCost), \(extraCost)\n", intraCost+extraCost); show nothing in the output? The output I see using Gecode is:
Running dsm.mzn
intraCost, extraCost
clusters = array1d(1..4, [{A, B, C, D, E, F, G}, {}, {}, {}]);
clusterCard = array1d(1..4, [49, 0, 0, 0]);
----------
<snipped to save space>
----------
clusters = array1d(1..4, [{B, C, D, G}, {A, E, F}, {}, {}]);
clusterCard = array1d(1..4, [16, 9, 0, 0]);
----------
==========
Finished in 5s 419msec
Question 4
With the expression constraint (clusters[1] union clusters[2] union clusters[3] union clusters[4]) = dsmElements; I wanted to say that the union of all clusters should equal the set of all nodes. Unfortunately, I did not find a way to make this big union more dynamic, so for now I just list all the clusters manually. Is there a way to make this expression return the union of all sets in the array of sets?
Question 5
If I understand it correctly (for example from here), the intra-cluster cost is the sum of all interactions within a cluster, multiplied by the size of the cluster raised to some power, where the size is the cardinality of the set of nodes that represents the cluster.
The extra-cluster cost is the sum of interactions between an element that does not belong to a cluster and all elements of that cluster, multiplied by the cardinality of the whole space of nodes raised to some power.
The main question here is: are the intraCost and extraCost in the model correct (they seem to be, but still), and is there a better way to express these sums?
Thanks!
(Perhaps you would get more answers if you separate this into multiple questions.)
Question 3:
Here's an answer on the trace question:
When running the model, the trace actually shows this:
intraCost, extraCost
which is not what you expect, of course. trace takes effect when the model is created (flattened), but at that stage these two decision variables have no values yet, so MiniZinc shows only the variable names. They get values to show only after the (first) solution is reached, and can then be shown in the output section.
trace is mostly used to see what's happening in loops where one can trace the (fixed) loop variables etc.
If you trace an array of decision variables, they will be represented in a different fashion; the array x will be shown as X_INTRODUCED_0_ etc.
And you can also use trace for domain reflection, e.g. using lb and ub to get the lower/upper value of the domain of a variable ("safe approximation of the bounds" as the documentation states it: https://www.minizinc.org/doc-2.5.5/en/predicates.html?highlight=ub_array). Here's an example which shows the domain of the intraCost variable:
constraint
trace("intraCost: \(lb(intraCost))..\(ub(intraCost))\n")
;
which shows
intraCost: -infinity..infinity
You can read a little more about trace here https://www.minizinc.org/doc-2.5.5/en/efficient.html?highlight=trace .
Update: answers to questions 1, 2, and 4.
Constraints #1 and #2 mean the same thing, i.e. that the clusters should be pairwise disjoint. Constraint #1 is a little different in that it loops over decision variables, while constraint #2 uses plain indices. One can guess that #2 is faster, since #1 uses where i != j over decision variables, which must be translated into extra constraints. (And using i < j instead should be a little faster.)
The all_different constraint states about the same thing, and depending on the underlying solver it might be faster if it is translated to an efficient algorithm in the solver.
In the model there is also the following constraint which states that all elements must be used:
constraint (clusters[1] union clusters[2] union clusters[3] union clusters[4]) = dsmElements;
Apart from efficiency, all the constraints above can be replaced with one single constraint, partition_set, which ensures that the clusters are disjoint and that every element of dsmElements is used:
constraint partition_set(clusters,dsmElements);
It might be faster to also combine with the all_different constraint, but that has to be tested.
I have a set, S = { 1, 2, 3, 4, 5 }.
If I wanted to sum this in standard logic it's just ∑S (no MathJax on SO so I can't format this nicely).
What's the VDM equivalent? I don't see anything in the numerics/sets section of the language reference.
There isn't a standard library function to do this (though perhaps there should be). You would sum a set with a simple recursive function:
sum: set of nat +> nat
sum(s) ==
    if s = {}
    then 0
    else let e in set s in
        e + sum(s \ {e})
measure card s;
The "let" selects an arbitrary element from the set, and then add that to the sum of the remainder. The measure says that the recursion always deals with smaller sets.
This should work:
sum(S)
But you could find this very easily.
I understand how the GCD test works on a trivial example like the one below:
for(i=1; i<=100; i++)
{
X[2*i+3] = X[2*i] + 50;
}
we first transform it into the following form:
X[a*i + b] and X[c*i + d]
a=2, b=3, c=2, d=0 and GCD(a,c)=2 and (d-b) is -3. Since 2 does not divide -3, no dependence is possible.
But how can we do this GCD test on a doubly nested loop?
For example:
for (i=0; i<10; i++){
for (j=0; j<10; j++){
A[1+2*i + 20*j] = A[2+20*i + 2*j];
}
}
While the subscripts can be delinearized, the GCD test is simple to apply directly. In your example, the subscript pair is [1+2*i + 20*j] and [2+20*i + 2*j], so we're looking for an integer solution to the equation
1 + 2*i + 20*j = 2 + 20*i' + 2*j'
Rearranging, we get
2*i - 20*i' + 20*j - 2*j' = 1
Compute the GCD of all the coefficients, 2, -20, 20, and -2, and see if it divides the constant. In this case, the GCD is 2. Since 2 doesn't divide 1, there's no dependence.
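If you want to check the arithmetic mechanically, here is a minimal sketch in Python (dependence_possible is just an illustrative helper name, not a standard API):
# Single-equation GCD test: sum(coeffs * unknowns) = constant can have an
# integer solution only if gcd(coeffs) divides the constant.
from math import gcd
from functools import reduce

def dependence_possible(coeffs, constant):
    g = reduce(gcd, (abs(c) for c in coeffs))
    return constant % g == 0

# 2*i - 20*i' + 20*j - 2*j' = 1  (the nested-loop example above)
print(dependence_possible([2, -20, 20, -2], 1))   # False: no dependence
# 2*i - 2*i' = -3                (the single-loop example)
print(dependence_possible([2, -2], -3))           # False: no dependence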
The "easy" way to apply GCD in the nested loop case is to apply it only in cases where the arrays themselves are multidemsional; i.e., the original source code uses multiple subscripts rather than already linearized expressions. Of course if you can "back transform" these linearized subscripts then you'll have the equivalent.
Once you've cast the problem as a multidemsional problem then you may simply apply the GCD test "dimension by dimension". If any dimension shows no dependence then you can stop and declare there is no dependence for the entire multidemsional subscripting sequence.
The key of course is that casting as a multidimensional indexing problem gives you the nice property that there's a one-to-one mapping between individual index values and the corresponding index expression tuples. Without this the problem is harder.
This is the approach I took in the ASC Fortran vectorizing compiler back in the 70's, and it worked pretty well, particularly in conjunction with directional subscript analysis for the non-disjoint case. The GCD test by itself is really not sufficient, but it gives you a relatively inexpensive way of making an early decision in your analysis in those cases where you can then avoid the more expensive dependence analysis.
Hey guys, I want your suggestions on how to change the values of two variables without a third one, in Objective-C.
Is there any way? Please let me know.
It can be done in any language. x and y are two variables and we want to swap them:
{
    // let's say x, y are 1, 2
    x = x + y; // 1 + 2 = 3
    y = x - y; // 3 - 2 = 1
    x = x - y; // 3 - 1 = 2
}
You can use these equations in any language to achieve this.
Do you mean exchange the value of two variables, as in the XOR swap algorithm? Unless you're trying to answer a pointless interview question, programming in assembly language, or competing in the IOCCC, don't bother. A good optimizing compiler will probably handle the standard tmp = a; a = b; b = tmp; better than whatever trick you might come up with.
If you are doing one of those things (or are just curious), see the Wikipedia article for more info.
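If you are just curious, here is a minimal sketch of the variants side by side; Python is used for brevity, since the trick itself is language-independent:
# Sketch of the swap variants discussed here.
x, y = 1, 2

# Arithmetic swap (as in the snippet above); beware of overflow with
# fixed-width integer types.
x = x + y
y = x - y
x = x - y

# XOR swap: integers only, and it breaks if x and y alias the same storage.
x ^= y
y ^= x
x ^= y

# The idiomatic way: let the language/compiler handle the temporary.
x, y = y, x
print(x, y)  # 2 1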
As far as numbers are concerned, you can swap them in any language without using a third variable, whether it's Java, Objective-C, or C/C++.
For more info
Potential Problem in "Swapping values of two variables without using a third variable"
Since this is explicitly for iPhone, you can use the ARM instruction SWP, but it's almost inconceivable why you'd want to. The compiler is much, much better at this kind of optimization. If you just want to avoid the temporary variable in code, write an inline function to handle it. The compiler will optimize it away if it can be done more efficiently.
NSString *first = @"bharath";
NSString *second = @"raj";
first = [NSString stringWithFormat:@"%@%@", first, second];
NSRange needleRange = NSMakeRange(0, first.length - second.length);
second = [first substringWithRange:needleRange];
first = [first substringFromIndex:second.length];
NSLog(@"first---> %@, Second---> %@", first, second);
The problem in general:
I have a big 2d point space, sparsely populated with dots.
Think of it as a big white canvas sprinkled with black dots.
I have to iterate over and search through these dots a lot.
The Canvas (point space) can be huge, bordering on the limits of int, and its size is unknown before setting points in there.
That brought me to the idea of hashing:
Ideal:
I need a hash function that takes a 2D point and returns a unique uint32, so that no collisions can occur. You can assume that the number of dots on the Canvas is easily countable by a uint32.
IMPORTANT: It is impossible to know the size of the Canvas beforehand (it may even change), so things like canvaswidth * y + x are sadly out of the question.
I also tried a very naive abs(x) + abs(y), but that produces too many collisions.
Compromise:
A hash function that provides keys with a very low probability of collision.
Cantor's enumeration of pairs
n = ((x + y)*(x + y + 1)/2) + y
might be interesting, as it's closest to your original canvaswidth * y + x but will work for any x or y. But for a real-world int32 hash, rather than a mapping of pairs of integers to integers, you're probably better off with a bit-manipulation mix such as Bob Jenkins's, calling it with x, y, and a salt.
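To see how quickly Cantor's pairing outgrows 32 bits (it is an enumeration of pairs, not a 32-bit hash), here is a quick Python sketch:
# Cantor's pairing function n = (x + y)(x + y + 1)/2 + y for non-negative x, y.
def cantor_pair(x: int, y: int) -> int:
    return (x + y) * (x + y + 1) // 2 + y

print(cantor_pair(3, 4))          # 32
print(cantor_pair(70000, 70000))  # 9800140000, already past 2**32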
a hash function that is GUARANTEED collision-free is not a hash function :)
Instead of using a hash function, you could consider using binary space partition trees (BSPs) or XY-trees (closely related).
If you want to hash two uint32's into one uint32, do not use things like Y & 0xFFFF because that discards half of the bits. Do something like
(x * 0x1f1f1f1f) ^ y
(you need to transform one of the variables first to make sure the hash function is not commutative)
Like Emil, but handles 16-bit overflows in x in a way that produces fewer collisions, and takes fewer instructions to compute:
hash = ( y << 16 ) ^ x;
You can recursively divide your XY plane into cells, then divide these cells into sub-cells, etc.
In 2008, Gustavo Niemeyer invented his Geohash geocoding system.
Amazon's open-source Geo Library computes the hash for any longitude-latitude coordinate. The resulting Geohash value is a 63-bit number. The probability of collision depends on the hash's resolution: if two objects are closer than the intrinsic resolution, the calculated hash will be identical.
Read more:
https://en.wikipedia.org/wiki/Geohash
https://aws.amazon.com/fr/blogs/mobile/geo-library-for-amazon-dynamodb-part-1-table-structure/
https://github.com/awslabs/dynamodb-geo
Your "ideal" is impossible.
You want a mapping (x, y) -> i where x, y, and i are all 32-bit quantities, which is guaranteed not to generate duplicate values of i.
Here's why: suppose there is a function hash() so that hash(x, y) gives a different integer value for every pair. There are 2^32 (about 4 billion) values for x, and 2^32 values of y, so there are 2^64 (about 18 million trillion) possible pairs, each of which would need its own hash value. But there are only 2^32 possible values in a 32-bit int, so distinct results for every pair cannot fit in a 32-bit int.
See also http://en.wikipedia.org/wiki/Counting_argument
Generally, you should always design your data structures to deal with collisions. (Unless your hashes are very long (at least 128 bit), very good (use cryptographic hash functions), and you're feeling lucky).
Perhaps?
hash = ((y & 0xFFFF) << 16) | (x & 0xFFFF);
Works as long as x and y can be stored as 16-bit integers. No idea about how many collisions this causes for larger integers, though. One idea might be to still use this scheme but combine it with a compression step, such as taking the coordinates modulo 2^16.
If you can do a = ((y & 0xffff) << 16) | (x & 0xffff) then you could afterward apply a reversible 32-bit mix to a, such as Thomas Wang's
uint32_t hash(uint32_t a)
{
    a = (a ^ 61) ^ (a >> 16);
    a = a + (a << 3);
    a = a ^ (a >> 4);
    a = a * 0x27d4eb2d;
    a = a ^ (a >> 15);
    return a;
}
That way you get a random-looking result rather than high bits from one dimension and low bits from the other.
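For reference, here is a rough Python transcription of the pack-then-mix idea; the explicit & 0xFFFFFFFF masks emulate the 32-bit wrap-around that C gives you for free:
# Pack the two 16-bit halves, then run Thomas Wang's 32-bit mix on the result.
MASK = 0xFFFFFFFF

def wang_mix(a: int) -> int:
    a = ((a ^ 61) ^ (a >> 16)) & MASK
    a = (a + (a << 3)) & MASK
    a = (a ^ (a >> 4)) & MASK
    a = (a * 0x27d4eb2d) & MASK
    a = (a ^ (a >> 15)) & MASK
    return a

def point_hash(x: int, y: int) -> int:
    packed = ((y & 0xFFFF) << 16) | (x & 0xFFFF)
    return wang_mix(packed)

print(hex(point_hash(10, 20)))
print(hex(point_hash(11, 20)))  # a neighbouring point; the mix scatters the bits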
You can do
a >= b ? a * a + a + b : a + b * b
taken from here.
That works for points in the non-negative quadrant. If your coordinates can be negative too, then you will have to do:
A = a >= 0 ? 2 * a : -2 * a - 1;
B = b >= 0 ? 2 * b : -2 * b - 1;
A >= B ? A * A + A + B : A + B * B;
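Here is the same pairing as a small Python sketch (Python only for brevity):
# Transcription of the pairing above: fold negatives onto non-negatives,
# then apply the elegant pairing function.
def pair(a: int, b: int) -> int:
    A = 2 * a if a >= 0 else -2 * a - 1
    B = 2 * b if b >= 0 else -2 * b - 1
    return A * A + A + B if A >= B else A + B * B

print(pair(3, -4))   # 55
print(pair(-4, 3))   # 62: the pairing is order-sensitive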
But to restrict the output to a uint, you will have to keep an upper bound on your inputs; and if so, it turns out that you do know the bounds. In other words, in programming it is impractical to write a function without some idea of the integer types your inputs and output can have, and every integer type has a lower and an upper bound.
public uint GetHashCode(whatever a, whatever b)
{
    if (a > ushort.MaxValue || b > ushort.MaxValue ||
        a < ushort.MinValue || b < ushort.MinValue)
    {
        throw new ArgumentOutOfRangeException();
    }
    return (uint)(a * short.MaxValue + b); // very good space/speed efficiency
    // or whatever your function is.
}
If you want the output to be strictly a uint for an unknown range of inputs, then there will be a reasonable number of collisions, depending on that range. What I would suggest is a function that can overflow, but unchecked. Emil's solution is great; in C#:
return unchecked((uint)((a & 0xffff) << 16 | (b & 0xffff)));
See Mapping two integers to one, in a unique and deterministic way for a plethora of options.
Depending on your use case, it might be possible to use a quadtree and replace points with the string of branch names. This is actually a sparse representation of the points; it requires a custom quadtree structure that extends the canvas by adding branches when you add points off the canvas, but it avoids collisions and gives you benefits like quick nearest-neighbour searches.
If you're using a language or platform in which all objects (even primitive ones like integers) have built-in hash functions (JVM languages like Java, .NET languages like C#, and others like Python, Ruby, etc.), you may use the built-in hash values as building blocks and add your own "hashing flavor" into the mix, like this:
// C# code snippet
public class SomeVerySimplePoint {
    public int X;
    public int Y;

    public override int GetHashCode() {
        return ( Y.GetHashCode() << 16 ) ^ X.GetHashCode();
    }
}
It may also be handy to have test cases (e.g. a predefined million-point set) that run against each candidate hash algorithm and compare them on aspects like computation time, memory required, key collision count, and edge cases (very big or very small values).
The Fibonacci hash works very well for integer pairs. The multiplier is 0x9E3779B9; for other word sizes, use 1/phi = (sqrt(5)-1)/2 * 2^w, rounded to an odd number. The hash is a1 + a2*multiplier. This will give very different values for close-together pairs; I do not know about the behaviour over all pairs.
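A minimal sketch, assuming 32-bit words (so the multiplier is 0x9E3779B9 and the result is masked to 32 bits):
# Fibonacci-style pair hash; the & 0xFFFFFFFF emulates 32-bit wrap-around.
def fib_pair_hash(a1: int, a2: int) -> int:
    return (a1 + a2 * 0x9E3779B9) & 0xFFFFFFFF

print(hex(fib_pair_hash(3, 4)))  # compare with the next line
print(hex(fib_pair_hash(3, 5)))  # a close pair lands far away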