Maximum packing of rectangles in a circle - MATLAB

I work at a nanotech lab where I do silicon wafer dicing. (The wafer saw cuts only parallel lines.) We are, of course, trying to maximize the yield of the die we cut. All of the die will be equal size, either rectangular or square, and the die are all cut from a circular wafer. Essentially, I am trying to pack the maximum number of rectangles into a circle.
I have only a pretty basic understanding of MATLAB and an intermediate understanding of calculus. Is there any (relatively) simple way to do this, or am I in over my head?

Go from here, and good luck:
http://en.wikipedia.org/wiki/Knapsack_problem
and get here:
http://www-sop.inria.fr/mascotte/WorkshopScheduling/2Dpacking.pdf
At least you'll have some idea of what you are tackling here.

I was fascinated to read your question because I did a project on this for my training as a Mathematics Teacher. I'm also quite pleased to know that it's thought to be an NP-hard problem, because my project was leading me to the same conclusion.
Using basic calculus, I calculated the first few 'generations' of rectangles of maximum size, but it gets complex quite quickly.
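For the first 'generation' - a single rectangle of maximal area inscribed in a circle of radius $r$, which is my reading of the term - the calculus is short: a rectangle of width $w$ inscribed in the circle has height $h = \sqrt{4r^2 - w^2}$, so

$$A(w) = w\sqrt{4r^2 - w^2}, \qquad A'(w) = \frac{4r^2 - 2w^2}{\sqrt{4r^2 - w^2}} = 0 \;\Longrightarrow\; w = h = r\sqrt{2}, \quad A_{\max} = 2r^2,$$

i.e. the largest first rectangle is a square.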
You can read my project here:
Beckett, R. Parcels of Pi: A curve-packing problem. Bath Spa MEC. 2009.
Pages 1 - 15
Pages 16 - 30
I hope that some of my findings are useful to you, or at least interesting. I thought that the most likely application of this idea would be in computer nanotechnology.
Kind regards.

Packing arbitrary rectangles into a circle to meet a space-efficiency objective is, in general, a non-convex (NP-hard) optimization problem, so there is unlikely to be an elegant or simple method that solves it optimally. The practical solution methods all depend on whatever domain knowledge you can use to prune the search tree or build heuristics. If you have no experience with this type of problem, you should probably consult an expert.
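That said, the dicing problem in the question has special structure: because the saw cuts full parallel lines, every die lies on a single rectangular grid, so the only free parameters are the two grid offsets, and a brute-force search over them is simple and effective. A minimal Python sketch (the logic ports directly to MATLAB; the wafer radius, die size, and offset resolution are illustrative):

import numpy as np

def count_dies(R, w, h, dx, dy):
    # Count w-by-h dies lying entirely inside a circle of radius R
    # (centred at the origin) when the cut grid is shifted by (dx, dy).
    # A die is inside iff all four corners are, because a disk is convex.
    n = int(np.ceil(R / min(w, h))) + 1
    count = 0
    for i in range(-n, n):
        for j in range(-n, n):
            x0, y0 = i * w + dx, j * h + dy
            if all(x * x + y * y <= R * R
                   for x in (x0, x0 + w) for y in (y0, y0 + h)):
                count += 1
    return count

def best_offset(R, w, h, steps=25):
    # By periodicity, only offsets in [0, w) x [0, h) need to be tried.
    return max(((count_dies(R, w, h, dx, dy), dx, dy)
                for dx in np.linspace(0, w, steps, endpoint=False)
                for dy in np.linspace(0, h, steps, endpoint=False)),
               key=lambda t: t[0])

print(best_offset(R=50.0, w=7.0, h=5.0))  # e.g. 100 mm wafer, 7 mm x 5 mm dies

This won't scale to arbitrary rectangle packing, but for dies on a fixed cut grid it finds the best placement in seconds.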

Doesn't this resemble Gauss's circle problem? See
http://mathworld.wolfram.com/GausssCircleProblem.html
Or it can be seen as a packing problem:
http://en.wikipedia.org/wiki/Packing_problem#Squares_in_circle
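To make the resemblance concrete: Gauss's circle problem counts the integer lattice points inside a circle of radius r, which takes only a few lines to compute directly (a quick Python sketch):

def lattice_points(r):
    # Number of integer points (x, y) with x^2 + y^2 <= r^2,
    # i.e. Gauss's circle problem; it approaches pi * r^2 for large r.
    n = int(r)
    return sum(1 for x in range(-n, n + 1)
                 for y in range(-n, n + 1)
                 if x * x + y * y <= r * r)

print(lattice_points(10))  # 317, versus pi * 10^2 ~ 314.16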

Inner workings of Google's Quick Draw

I'm asking this here because I didn't find anything online.
I would be interested in how Google Quick Draw works, specifically:
1) How does it output the answer - does it have a giant output vector with a probability for each type of drawing?
2) How does it read the data - I see they've implemented some sort of order-aware input system, but does that mean they feed in the positions of the interpolated lines that users draw? This is problematic because the input is variable-length - how did they solve that?
3) And, finally, which training algorithm are they using? Do they keep training as the data grows each time someone draws something new, or did they just feed the data in once when the model was created?
If you know any papers on this, or by some miracle you work at Google and/or can explain how it works, I would be really grateful. :)
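For what it's worth on point 2, one common way to handle variable-length stroke input - an assumption about the general technique, not Google's confirmed design - is to encode each drawing as a sequence of (dx, dy, pen_lifted) triples, zero-pad the sequences to a common length, and mask the padding in a recurrent classifier whose softmax output has one probability per category (which would also answer point 1). A minimal Keras sketch:

import tensorflow as tf

NUM_CLASSES = 345  # the public Quick, Draw! dataset has 345 categories

# Hypothetical model: each drawing is a variable-length sequence of
# (dx, dy, pen_lifted) triples; Masking makes the LSTM skip padded steps.
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, 3)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")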

What size tour can I reasonably expect to solve with GLPK?

I'm playing around with the travelling salesman example provided with GLPK, and trying to get a feel for what problem size I can reasonably expect to solve. I've managed to solve a 50-node graph, but a 100-node one doesn't seem to converge in a reasonable timescale (30 minutes or so on modern hardware).
GLPK has lots of options for the MIP solver. I've tried various combinations but I'm not at all clear what options might help. This page has some discussion but is somewhat out of date and the advice is rather general.
Is it reasonable to expect GLPK to solve a 100 node tour or greater in a practical time frame (say, less than 4 hours)? When does the problem size become intractable? Are any of the many command-line options likely to help?
Are any of the many command-line options likely to help?
The tspsol command-line application (based on the GLPK C library) and the C API in glptsp.h are tailor-made for solving the TSP.
Is it reasonable to expect GLPK to solve a 100 node tour or greater in a practical time frame (say, less than 4 hours)? When does the problem size become intractable?
My guess is that it also depends greatly on the problem instance. If you look into the C code, you will see that heuristics are used to generate an initial tour. How well a heuristic works varies from instance to instance.
I assume you know that the TSP is famous for being difficult to solve, see computational complexity.
I was able to solve eil51 in 2 seconds and rat99 in about 50 seconds by starting with a relaxation of the TSP and then eliminating sub-tours until a feasible solution is found. Better modelling will achieve more than fiddling with the solver options: GLPK should be able to deal with bigger instances if the modelling is good enough. (eil51 and rat99 are instances documented in TSPLIB.)
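The loop is straightforward to sketch. Below is a minimal Python version using PuLP with GLPK as the backend (the random instance and variable names are mine, not from the GLPK examples): solve the degree-2 relaxation, find the sub-tours in the solution, cut each one off, and repeat.

import itertools
import random
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, GLPK_CMD, value

random.seed(0)
n = 20
pts = [(random.random(), random.random()) for _ in range(n)]
dist = {(i, j): ((pts[i][0] - pts[j][0]) ** 2
                 + (pts[i][1] - pts[j][1]) ** 2) ** 0.5
        for i, j in itertools.combinations(range(n), 2)}

prob = LpProblem("tsp", LpMinimize)
x = LpVariable.dicts("x", list(dist), cat="Binary")
prob += lpSum(dist[e] * x[e] for e in dist)
for v in range(n):  # degree-2 constraints: every city gets two tour edges
    prob += lpSum(x[e] for e in dist if v in e) == 2

def subtours(edges):
    # Split the chosen edges into their connected cycles.
    adj = {v: [] for v in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    unvisited, tours = set(range(n)), []
    while unvisited:
        tour, stack = [], [next(iter(unvisited))]
        while stack:
            v = stack.pop()
            if v in unvisited:
                unvisited.remove(v)
                tour.append(v)
                stack.extend(adj[v])
        tours.append(tour)
    return tours

while True:
    prob.solve(GLPK_CMD(msg=False))
    chosen = [e for e in dist if value(x[e]) > 0.5]
    tours = subtours(chosen)
    if len(tours) == 1:  # a single cycle through all cities: done
        break
    for t in tours:      # forbid each sub-tour, then re-solve
        prob += lpSum(x[e] for e in dist
                      if e[0] in t and e[1] in t) <= len(t) - 1

print("tour length:", value(prob.objective))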
I recommend reading some material on TSP modelling; there are a lot of articles and books out there. A nice starting point may be 'Applied Integer Programming' by Der-San Chen et al.

Difference between cvPOSIT and cvFindExtrinsicCameraParams2

Another OpenCV question;
Without me having to implement two versions - can anyone enlighten me as to the differences between cvPOSIT and cvFindExtrinsicCameraParams2, and maybe the advantages of each?
The inputs and outputs appear to be the same.
From my experience, cvFindExtrinsicCameraParams2() works for coplanar points (so it is probably an implementation of http://dl.acm.org/citation.cfm?id=228149), while cvPOSIT() doesn't. But I am not 100% sure.
It appears that cvPOSIT() only exists in OpenCV's old C API and not in the new C++ API. Conversely, cvFindExtrinsicCameraParams2() is in both. While not a perfect indicator, my best guess is that they both implement the POSIT algorithm with minor modifications and the former exists only for legacy reasons.
Beyond that, your guess is as good as mine. If you want a definitive answer, I suggest asking on the OpenCV mailing list.
I've used cvPOSIT already. It only works on 3D non-coplanar points on the object, because it is based on the algorithm from DeMenthon, D. F. and Davis, L. S., 1995, "Model-Based Object Pose in 25 Lines of Code". So you will have to find a way around it for coplanar features.
cvFindExtrinsicCameraParams2(), on the other hand, also works on planar features: it solves for the transformation using cvFindHomography and then refines the result by Levenberg-Marquardt optimization. For non-coplanar points, the initial estimate is instead computed by a different method, the DLT (Direct Linear Transformation) - no longer the "...25 Lines of Code" algorithm.
I'm not sure about their relative performance, i.e. which one is faster. As far as I know, "...25 Lines of Code" is very fast and still suitable for real-time vision.
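For reference, cvFindExtrinsicCameraParams2() has a direct successor in the modern API: solvePnP() with the SOLVEPNP_ITERATIVE flag follows the same path described above (homography- or DLT-based initialisation refined by Levenberg-Marquardt). A minimal Python sketch, with made-up points and camera intrinsics:

import numpy as np
import cv2

# Hypothetical data: four coplanar object points (z = 0) and their
# observed image projections, plus an assumed pinhole camera matrix.
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float64)
image_points = np.array([[320, 240], [400, 235], [405, 310], [325, 315]],
                        dtype=np.float64)
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print(ok, rvec.ravel(), tvec.ravel())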

What is the best way to visualize abstract concepts (algorithm/data structure)?

What's the best way to "see what is happening" in an algorithm/data structure? If it's something like a binary search I just imagine a bunch of boxes in a row, and throwing half of them out each time. Is there something more powerful that will let us grok something as abstract as an algorithm/data structure?
Clarification: I'm looking for something a little more general. Example: to visualize time, some people use a clock in their head, but that's slow; a more natural feel would be a globe, and if you are trying to get a 'feel' for how an algorithm works, you can imagine two objects moving in different directions on that globe.
In general, animations are excellent for visualizing processes that occur over time, such as the execution of algorithms.
For example, check out these animations: Animated Sorting Algorithms
Here's an animation that shows how the data structures behave during MergeSort.
Now, whether you want to spend time hooking up your algorithm to some kind of animated visualization is a different question!
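If you do, even a plain text trace gets you surprisingly far. Here's a minimal Python sketch that prints one "frame" after every merge step of MergeSort:

def merge_sort(a, lo=0, hi=None):
    # In-place merge sort that prints the whole array after each
    # merge, giving a crude text animation of the algorithm.
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    merge_sort(a, lo, mid)
    merge_sort(a, mid, hi)
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged += a[i:mid] + a[j:hi]
    a[lo:hi] = merged
    print(f"merged [{lo}:{hi}] -> {a}")

merge_sort([5, 2, 8, 3, 1, 9, 4])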
Algorithm animation was a big research area in the 1990s. Marc H. Brown, who was then at the Digital Systems Research Center, did a large amount of interesting work. The source code to his Zeus animation system is still available, and it would not be hard to get it set up. I played with Zeus years ago and if I remember correctly it ships with dozens of animations.
They had a couple of 'festivals' where non-experts got together to animate new algorithms. You can see one of the reports (with still images) on the 1993 festival. YouTube has one of their videos Visualizing Combinatorial Structures.
Describing something in terms of another thing is called analogy. You just did it with the binary search being a bunch of boxes. Just build on the student's prior knowledge.
For instance, trees can be thought of as linked lists with multiple "next" nodes, or they could be explained to the uninitiated as something similar to a hierarchy.
As for concrete methods of explanation, graphs and state machines can be easily visualized with Graphviz. A very basic directed graph can be expressed very simply:
digraph G {
    // A feeds into the cycle B -> C -> D -> B
    A -> B;
    B -> C;
    C -> D;
    D -> B;
}
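To render it, save the snippet as graph.dot and run, for example, dot -Tpng graph.dot -o graph.png.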

How do I estimate tasks using function points?

What are the steps to estimating using function points?
Is there a quick-reference guide of some sort out there?
I took a conference session on Function Point Analysis a few years back. There is a lot to it. You can check out the Free Function Point Training Manual online, the Fundamentals of Function Points, or I suspect you can get a book on it at a computer store.
You might also check out the International Function Point Users Group and see if they have some resources or a local meeting for you.
You really need to get some training on it. Check with IFPUG. You will unknowingly pick up some destructive bad habits if self-taught. It also helps to have an experienced FP analyst review some of your early attempts.
It's the kind of thing that appears overwhelmingly complex until you "get it" and then it's fairly quick to do. It improved my requirements analysis a lot too. I often spot contradictions and gaps when doing a count.
It isn't limited to BDUF Waterfall projects either. I spent three years using FP and Planning Poker as cross-checks on one another when contracting agile methods projects.
I was IFPUG-certified from 2002-2005 and am still using FP analysis. I've seen it misused a lot, and I think that's why it has such a bad reputation.
I recommend you take a look at COSMIC Function Points: https://cosmic-sizing.org. COSMIC Function Points are also an ISO standard (ISO/IEC 19761) for measuring software size, and they are an evolved improvement over IFPUG.
You can quickly estimate size by counting the entries, exits, reads and writes.
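As a hypothetical worked example (the process and its movements are mine, not from the guide): every data movement counts as exactly one CFP under COSMIC, so a "create customer" process would tally up like this:

# Data movements of a hypothetical "create customer" process;
# each Entry, Read, Write, or Exit is worth 1 CFP under COSMIC.
movements = {
    "receive customer data": "Entry",
    "check for duplicates":  "Read",
    "store the record":      "Write",
    "confirm to the user":   "Exit",
}
print(len(movements), "CFP")  # 4 CFP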
Compared with the IFPUG manual, learning COSMIC is much easier; the free book below is all you need, and you can read it in a day.
Recommended reading: https://cosmic-sizing.org/publications/measurement-guide/