Transform Coordinates to UTM84-17N Projection - Swift

In Swift, is there a way to take lat/long coordinates from MapKit and translate them to a specified projection, such as UTM84-17N (the format AutoCAD uses)?

Well, I never did figure out how to do this using iOS/Swift, but I did discover Lisp code for it in both AutoCAD Map (two or three lines of code) and, even better, in BricsCAD using the Spatial Manager extension (one single line of code).
Both involve three parameters: the source coordinates (in my case, lat/long), the source projection code, and the destination projection code.
The lists of projection codes are provided in the help.
In fact, the company that owns Spatial Manager wrote the routine in Lisp for me when I posed the question to them; amazing!
So there might be a manageable method that could be written for any platform; who knows (see the sketch at the end of this answer).
If anyone is interested in those lines of code for CAD, I'll look them up and post them.
By the way, a plug for BricsCAD: AutoCAD Map from Autodesk is now almost $10,000 Canadian; BricsCAD is far less than $1,000 for their best Pro option. NO BRAINER. And, after years as an AutoCAD scripting guru, I find that BricsCAD is way better! AutoCAD is now so full of bloatware trinkets and useless stuff (similar to Windows' verification pop-ups) that it takes about 30 seconds just to load on a fast rig, and they have made things so much more difficult for old-school users. Not to mention, the folks at BricsCAD are so incredibly helpful that you feel like family with them!
...only one tiny flaw so far in going with them, though: they don't have a proper Lisp editor (that I can find, anyway) that lets you test code line by line in real time, like AutoCAD does...
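On the "any platform" remark above: the lat/long to UTM conversion itself is closed-form math, so it can be ported to Swift (or anything else) without a CAD package. Here is a minimal sketch of Snyder's standard Transverse Mercator series for WGS84, hard-wired to zone 17N (central meridian 81 degrees west). It is written in MATLAB, and the function name is invented for illustration, but the series itself is the textbook one and is accurate to well under a metre inside the zone.
function [easting, northing] = utm17n(latDeg, lonDeg)
% WGS84 lat/long (degrees) -> UTM zone 17N easting/northing (metres).
% Illustrative sketch of Snyder's Transverse Mercator series.
a    = 6378137.0;                 % WGS84 semi-major axis
f    = 1/298.257223563;           % WGS84 flattening
k0   = 0.9996;                    % UTM scale factor on the central meridian
lon0 = -81*pi/180;                % central meridian of UTM zone 17
e2   = f*(2 - f);                 % first eccentricity squared
ep2  = e2/(1 - e2);               % second eccentricity squared
phi  = latDeg*pi/180;
lam  = lonDeg*pi/180;
N = a./sqrt(1 - e2*sin(phi).^2);  % prime-vertical radius of curvature
T = tan(phi).^2;
C = ep2*cos(phi).^2;
A = (lam - lon0).*cos(phi);
% meridional arc length from the equator
M = a*((1 - e2/4 - 3*e2^2/64 - 5*e2^3/256).*phi ...
    - (3*e2/8 + 3*e2^2/32 + 45*e2^3/1024).*sin(2*phi) ...
    + (15*e2^2/256 + 45*e2^3/1024).*sin(4*phi) ...
    - (35*e2^3/3072).*sin(6*phi));
easting = k0*N.*(A + (1 - T + C).*A.^3/6 ...
    + (5 - 18*T + T.^2 + 72*C - 58*ep2).*A.^5/120) + 500000;  % 500 km false easting
northing = k0*(M + N.*tan(phi).*(A.^2/2 ...
    + (5 - T + 9*C + 4*C.^2).*A.^4/24 ...
    + (61 - 58*T + T.^2 + 600*C - 330*ep2).*A.^6/720));       % no false northing in the north
end
Porting this to Swift is mostly a matter of syntax, since it needs nothing beyond sin, cos, tan and sqrt.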


Programming an intelligent game-playing bot

I am taking part in a programming competition where the objective is to write a bot that can play a specific game.
The objective of the game is to earn a certain number of points. You control multiple airships that you move around to capture islands, and you navigate drones that carry treasure. You play against one opponent, turns happen simultaneously, and there is a time limit. You can move multiple ships and drones in one turn. You can program your bot in Python, Java or C#.
The exact details don't matter, just that each ship has around 15 options each turn (moving and shooting) and overall you have around 10,000 different options for each turn (different configurations of airship movements and shooting).
Up until now I approached this competition naively and haven't done anything exceptionally clever (for example: if near enemy, shoot). I have read about the minimax algorithm, and I would really like to apply it here (or something similar); you can assume that I can tell the value of a state. My problem is the mass of options for each turn, which creates an enormous branching factor that doesn't let me get very deep.
Question 1: Is there a better, applicable approach to this problem? Perhaps deep learning or something similar?
Question 2: Is there a way to minimize the branching factor? I've read about alpha-beta pruning and similar algorithms, but nothing seems to do the job.
Any help would be much appreciated.
The minimax algorithm is a natural fit for these kinds of problems. First, the game is modelled in an abstract way, and then a solver is used to find the path from the current situation to a game state which maximizes the number of points. A similar approach to minimax is GOAP, which was implemented in the 1970s for Shakey the robot under the name STRIPS. But GOAP and minimax have two problems: first, an abstract model of the game is needed (perhaps in PDDL or in Game Description Language), and second, the state space is too big.
A better alternative to planning is to use a behavior tree. That is a static program which describes the behavior of an agent. No solver is needed, and no complete model of the game is needed. Instead, a bottom-up approach is used, with multiple edit-compile-run iterations for finding the optimal behavior tree (test-driven development). To implement such a programming approach, a so-called "reactive planner" has to be implemented first, which is another word for a real-time scheduler: a module which maps a behavior tree onto a Gantt chart so that each action is executed at a specific moment in time. As an introduction, the Unity3D engine is a good starting point; mature behavior-tree implementations are readily available for it.
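Since the asker specifically mentions alpha-beta pruning, here is a minimal depth-limited sketch (in MATLAB syntax purely for illustration; it ports directly to Python, Java or C#). The helpers legalMoves, applyMove, evaluate and isTerminal are placeholders you would supply for the actual game. Two caveats: the game's turns are simultaneous, so alternating-move minimax is already an approximation; and with ~10,000 combined options per turn you would also want to rank moves with a cheap heuristic and only recurse into the top few (a beam), which is what actually tames the branching factor.
function [bestScore, bestMove] = alphabeta(state, depth, alpha, beta, maximizing)
% Depth-limited minimax with alpha-beta pruning (illustrative sketch).
if depth == 0 || isTerminal(state)
    bestScore = evaluate(state);   % static evaluation of the position
    bestMove = [];
    return
end
moves = legalMoves(state);         % cell array of candidate moves,
bestMove = moves{1};               % ideally pre-sorted best-first
if maximizing
    bestScore = -Inf;
    for k = 1:numel(moves)
        s = alphabeta(applyMove(state, moves{k}), depth - 1, alpha, beta, false);
        if s > bestScore
            bestScore = s;
            bestMove = moves{k};
        end
        alpha = max(alpha, bestScore);
        if beta <= alpha
            break                  % prune: the opponent will never allow this line
        end
    end
else
    bestScore = Inf;
    for k = 1:numel(moves)
        s = alphabeta(applyMove(state, moves{k}), depth - 1, alpha, beta, true);
        if s < bestScore
            bestScore = s;
            bestMove = moves{k};
        end
        beta = min(beta, bestScore);
        if beta <= alpha
            break
        end
    end
end
end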

Should I use Object Oriented Programming in MATLAB?

I have an issue where I need to handle a lot of figures in MATLAB, and the code is starting to get messy. Different kinds of plot objects are added to the code at different stages, and some have legends and some do not. The problem is that there are no NULL legends: as soon as an object is created, so is a legend entry, although nothing is shown until legend(handles,...) is called. This means that if things are plotted and some need a legend entry and some do not, a lot of handles need to be passed around.
Now the file is starting to be quite long, about 1500 lines, with some globals that span many functions in the file. To prevent the "do not use globals" comments from pouring in: yes, I know globals are normally unnecessary, but the code was like that when I laid my hands on it. However, the code is getting more and more messy, and I am thinking about using object-oriented programming (OOP) to handle the figures.
The idea is to have the custom figure objects handle themselves and thus make the code more readable, split up into smaller blocks. The idea is to have a design like:
class Figure
private:
MainFrame;
SubFrame;
Lines;
Legends;
Title;
X-Label;
Y-Label;
Methods:
To be defined; for example formatting, plotting, editing the title, ...
The complete design is not really thought through completely, but the point of this question is really about using OOP in MATLAB. From what I have seen so far, it is not really used very much. Is there a reason for this? Could anyone give pros and cons of OOP in MATLAB? Is OOP recommended in MATLAB or not? (A classdef sketch of the design above follows this question.)
I have added the information about my issue since I understand that OOP is more needed for large, complex problems, so an answer would preferably weigh the drawbacks against the complexity of the problem (for example: never use OOP in MATLAB, use it only when you have complex problems, use it whenever you like, ...).
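For reference, here is a minimal sketch of roughly that design in MATLAB's classdef syntax (all names are invented for illustration; save it as FigureWrapper.m). The key design choice is deriving from handle: a handle class has reference semantics, so methods can update the object's state without the obj = method(obj) round-trip that a value class would force on you.
classdef FigureWrapper < handle
    % Wraps one figure plus the plot handles that should get legend entries.
    properties (Access = private)
        Fig              % figure handle
        Ax               % axes handle
        LegendEntries    % N-by-2 cell array of {line handle, label} pairs
    end
    methods
        function obj = FigureWrapper(titleStr)
            obj.Fig = figure;
            obj.Ax = axes('Parent', obj.Fig);
            hold(obj.Ax, 'on');
            title(obj.Ax, titleStr);
            obj.LegendEntries = {};
        end
        function addLine(obj, x, y, label)
            h = plot(obj.Ax, x, y);
            if nargin > 3 && ~isempty(label)   % only labelled lines get a legend entry
                obj.LegendEntries(end+1, :) = {h, label};
            end
        end
        function showLegend(obj)
            if ~isempty(obj.LegendEntries)
                legend([obj.LegendEntries{:,1}], obj.LegendEntries(:,2));
            end
        end
    end
end
Usage would then look like:
f = FigureWrapper('Results');
f.addLine(1:10, rand(1,10), 'measured');
f.addLine(1:10, rand(1,10), '');    % plotted, but no legend entry
f.showLegend();
This directly addresses the handle-passing complaint: the wrapper keeps track of which lines want legend entries, so nothing has to be threaded through the rest of the code.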
Okay, the question is about OOP in MATLAB - but isn't it really about OOP in MATLAB in your organisation?
By that I mean: think about who is going to use, develop, and maintain the code going forward.
Background: I have used OOP for my own toolbox (because it's complex/large enough to warrant it, and I develop/maintain it). However, in consultancy jobs for the majority of my clients I create functions (which in some instances call my toolbox), because when the job is finished they get the source code, and the majority are (much) more comfortable working with functions rather than classes.
In summary, I decide whether to use OOP based on the job specifics and on the situation where the code will be used (developed & maintained) in the future.
So, back to your topic: I would consider where you think the code is going to go and who will develop/maintain it. Will they be comfortable with classes, or will they be more comfortable with functions?
FYI: last year I was talking to MathWorks and they said that they run multiple "Intro to MATLAB" courses per week - but only one "MATLAB Classes" course per quarter! That gives you an indication of the level of MATLAB class use in industry.

Cellular Automata in Matlab

I'm currently teaching myself MATLAB, and I'm interested in the cellular automata exhibited in old programs like Wolfram's Life1D and Conway's Game of Life from the early 1980s. Is there any available code that would produce Wolfram's Life1D in MATLAB in some form? I've searched online but have not found anything. Thanks.
As the comments have pointed out, the MATLAB File Exchange is the place to start your search:
http://www.mathworks.com/matlabcentral/fileexchange/
Five minutes of poking around already turns up several promising links, the first of which implements Life1D:
http://www.mathworks.com/matlabcentral/fileexchange/26929-elementary-cellular-automata
http://www.mathworks.com/matlabcentral/fileexchange/27233-conway-game-of-life
http://www.mathworks.com/matlabcentral/fileexchange/4892-conways-game-of-life-in-3d
For enough detail to write your own Life1D implementation, you can check out:
http://www-inst.eecs.berkeley.edu/~cs61c/su08/assignments/hw/02/index.html
http://mathworld.wolfram.com/ElementaryCellularAutomaton.html
The trickiest part will probably be plotting the results efficiently. Again, the MATLAB File Exchange would be a good place to look for helper classes.
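That said, an elementary (Life1D-style) automaton is only a dozen lines of MATLAB. Here is a bare-bones sketch with a periodic boundary; the function name is made up, and rule is the Wolfram rule number (30, 90, 110, ...):
function grid = elementaryCA(rule, n, steps)
% Simulate a 1-D elementary cellular automaton with n cells for the
% given number of steps; row t of grid is the state at time t.
ruleBits = bitget(uint8(rule), 1:8);   % new-cell value for neighborhood codes 0..7
grid = false(steps, n);
grid(1, ceil(n/2)) = true;             % start from a single live cell in the middle
for t = 1:steps-1
    row   = grid(t, :);
    left  = [row(end), row(1:end-1)];  % periodic (wrap-around) boundary
    right = [row(2:end), row(1)];
    code  = 4*left + 2*row + right;    % neighborhood as a 3-bit number, 0..7
    grid(t+1, :) = ruleBits(code + 1); % look up the rule table (1-based indexing)
end
end
Displaying the result is then just imagesc(elementaryCA(30, 401, 200)) with a gray colormap; each row of the image is one generation.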

Using MATLAB's plotting features as an interactive part of a Fortran program

Although many of you will have a decent idea of what I'm aiming at just from reading the title, allow me a simple introduction all the same.
I have a Fortran program: it consists of a main program, some internal subroutines, 7 modules with their own procedures, and ... uhmm, that's it.
Without going into much detail, for I don't think it's necessary at this point: what would be the easiest way to use MATLAB's plotting features (mainly plot(x,y) with some customizations) as an interactive part of my program? For now I'm using some of my own custom plotting routines (based on HPGL and Calcomp's routines), but just as an exercise on my part, I'd like to see where this could go and how it would work (is what I'm suggesting even possible?). Also, how much effort would it take on my part?
I know this subject has been described rather extensively in many "tutorials" on the net, but for some reason I have trouble finding the really simple yet illustrative introductory ones. So if anyone can post an example or two, simple ones, I'd be really grateful. Or just take me by the hand and guide me through one working example.
Platform: IVF 11.something :) on Windows XP SP2, MATLAB 2008b.
The easiest way would be to have your Fortran program write to a file and have your MATLAB program read that file for the information you want to plot. I do most of my number-crunching on Linux, so I'm not entirely sure how Windows handles one process writing a file while another reads it at the same time. As a concrete (hypothetical) illustration: if the Fortran side just loops over something like write(20,*) x(i), y(i), the MATLAB side can be as short as the snippet below.
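% MATLAB side of the file-based route; 'results.dat' is an assumed
% name for a plain text file with one "x y" pair per line.
data = load('results.dat');    % load parses whitespace-separated numeric columns
plot(data(:,1), data(:,2), '-o')
xlabel('x'), ylabel('y')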
That's a bit of a kludge, though, so you might want to think about using MATLAB to call the Fortran program (or parts of it) and get the data directly for plotting. In this case you'll want to investigate "Creating Fortran MEX-Files" in the MATLAB documentation. This is relatively straightforward to do and would serve your needs if you were happy to have MATLAB drive the process and Fortran act as a compute service. I'd look in the examples distributed with MATLAB for simple Fortran MEX-files.
Finally, you could call MATLAB from your Fortran program; search the documentation for "Calling the MATLAB Engine". It's a little more difficult for me to see how this might fit your needs, and it's not something I'm terribly familiar with.
If you post again with more detail I may be able to provide more specific tips, but you should probably start rolling your sleeves up and diving into MEX-files.
Continuing the discussion of DISLIN as a solution, with an answer that won't fit into a comment...
@M. S. B. - hello. I apologize for writing in your answer, but these comments are much too short, and answering an answer with another answer is ... anyway ...
There is the Quick Plot feature of DISLIN: the routine QPLOT needs only three arguments to plot a curve (the X array, the Y array, and the number of points N). See Chapter 16 of the manual. Beyond that, only a few additional calls are needed to select the output device and label the axes. I haven't used this, so I don't know how good the auto-scaling is.
Yes, I know of Quick Plot and its related routines, but it is too rigid for my needs (you cannot change anything), and yes, its auto-scaling is somewhat quirky. Also, the margins inside the graph are too big.
Or, if you want to use the power of GRAF to set up your graph box, there is the subroutine GAXPAR to automatically generate recommended values. Passing -2 as the first argument to LABDIG automatically determines the number of digits in the tick-mark labels.
Have you tried the routines?
Sorry, I cannot find the GAXPAR routine you're referring to in DISLIN's index. Are you sure it is called exactly that?
Reply by M. S. B.: Yes, I am sure about the spelling of GAXPAR. It is the last routine in Chapter 4 of the DISLIN 9.5 PDF manual. Perhaps it is a new routine? There is also another path to automatic scaling: SETSCL - see Chapter 6.
So far, what I've been doing (apart from some "duct tape" solutions) is:
use dislin
implicit none
real, dimension(5) :: &
    x = [.5, 2., 3., 4., 5.], &
    y = [10., 22., 34., 43., 15.]
real :: xa, xe, xor, xstp, &   ! axis limits and steps; the values are
        ya, ye, yor, ystp      ! overridden by setscl below
call setpag('da4p')
call metafl('xwin')
call disini()
call winkey('return')
call setscl(x, size(x), 'x')
call setscl(y, size(y), 'y')
call axslen(1680, 2376)        ! (8/10)*2100 and 2970, respectively
call setgrf('name', 'name', 'line', 'line')
call incmrk(1)                 ! plot symbols at the data points
call hsymbl(3)
call graf(xa, xe, xor, xstp, ya, ye, yor, ystp)
call curve(x, y, size(x))
call disfin()
end
which will put the extreme values right on the axis. Do you perhaps know how I could get one "major tick margin" on the outside, so as to put some space between the curve and the axis (while still keeping SETSCL's effects)?
Even if you don't like the built-in auto-scaling, if you are already using DISLIN, rolling your own auto-scaling will be easier than calling Fortran from MATLAB. You can use the Fortran intrinsic functions minval and maxval to find the smallest and largest values in the data, then write a subroutine to round outwards to "nice" round values. Similarly, a subroutine to decide on the tick-mark spacing.
This is actually not so easy to accomplish (ideas proving me wrong will be gladly appreciated). Or should I say: it is easy if you know the rough range in which your values will lie, but if you don't know whether your values will lie in the range of 13-34 or 1330-3440, then ...
If I'm on the wrong track completely here, please explain if you meant something different. My English is somewhat lacking, so I can only hope the above is understandable.
Inside a subroutine that determines round graph start/end values, you could scale the actual min/max values to always lie between 1 and 10, pick nice round values from a table, and then unscale back to the correct range.
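To make that concrete, here is one way to do the scale/table/unscale step, sketched in MATLAB for brevity (the function name is invented; the same few lines translate one-for-one into a Fortran subroutine using the same intrinsics). It treats 13-34 and 1330-3440 identically, because everything is relative to the power of ten of the data range:
function [lo, hi, step] = nicescale(vmin, vmax)
% Round an axis range outwards to "nice" limits; assumes vmax > vmin.
rawstep = (vmax - vmin) / 5;        % aim for roughly 5 tick intervals
mag = 10^floor(log10(rawstep));     % power of ten just below rawstep
r = rawstep / mag;                  % normalized so that 1 <= r < 10
if r < 1.5                          % the "table" of nice multipliers
    nice = 1;
elseif r < 3.5
    nice = 2;
elseif r < 7.5
    nice = 5;
else
    nice = 10;
end
step = nice * mag;
lo = floor(vmin / step) * step;     % round the start down ...
hi = ceil(vmax / step) * step;      % ... and the end up
end
For 13-34 this gives 10 to 35 in steps of 5; for 1330-3440 it gives 1000 to 3500 in steps of 500.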
Dump MATLAB: it's proprietary, expensive, bloated/slow, and its code is not easy to parallelize.
What you should do instead is use something along the lines of DISLIN, PLplot, GINO, gnuplotfortran, etc.

What is the best way to visualize abstract concepts (algorithm/data structure)?

What's the best way to "see what is happening" in an algorithm/data structure? If it's something like binary search, I just imagine a bunch of boxes in a row and throw half of them out each time. Is there something more powerful that will let us grok something as abstract as an algorithm/data structure?
Clarification: I'm looking for something a little more general. Example: to visualize time, some people use a clock in their head, but that's slow, whereas a more natural feel would be a globe; and if you are trying to get a "feel" for how an algorithm works, you can imagine two objects moving in different directions on that globe.
In general, animations are excellent for visualizing processes that occur over time, such as the execution of algorithms.
For example, check out these animations: Animated Sort Algorithms.
Here's an animation that shows how the data structures work in MergeSort.
Now, whether you want to spend time hooking up your algorithm to some kind of animated visualization is a different question!
Algorithm animation was a big research area in the 1990s. Marc H. Brown, who was then at the Digital Systems Research Center, did a large amount of interesting work. The source code to his Zeus animation system is still available, and it would not be hard to get it set up. I played with Zeus years ago, and if I remember correctly it ships with dozens of animations.
They had a couple of "festivals" where non-experts got together to animate new algorithms. You can see one of the reports (with still images) from the 1993 festival. YouTube has one of their videos, Visualizing Combinatorial Structures.
Describing something in terms of another thing is called analogy. You just did it, with binary search being a bunch of boxes. Just build on the student's prior knowledge.
For instance, trees can be thought of as linked lists with multiple "next" nodes, or they could be explained to the uninitiated as something similar to a hierarchy.
As for concrete methods of explanation, graphs and state machines can be easily visualized with Graphviz. A very basic directed graph can be expressed very simply:
digraph G {
    A -> B;
    B -> C;
    C -> D;
    D -> B;
}
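Save that as, for example, graph.dot (the file name is just an illustration) and render it with the Graphviz command line: dot -Tpng graph.dot -o graph.png. The dot tool lays the nodes out automatically, which is exactly the part that is tedious to do by hand.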