What are the mechanisms and idioms of code re-use in OpenSCAD? [closed]

I designed a few simple 3D parts with OpenSCAD and I would like to move on to more complex parts now. As in most other programming languages, that would naturally include starting to re-use code that others have written before, such as functions for round/bevel edges, infill corners, Bézier curves, and common parts like screws and bolts.
How does that work in OpenSCAD? Specifically: What are the language features, idioms and officially recommended good practices of how code reuse is achieved in OpenSCAD?
(You are welcome to include pointers to good examples. But the question is about the mechanisms and good practices for code reuse in OpenSCAD, not about specific code that can be reused.)

Reusable code in OpenSCAD is organized in "libraries", similar to the package or library system in many other languages.
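The language features underlying these libraries are include <...> and use <...>, plus modules and functions as the units of reuse: include <file.scad> acts like textual inclusion and also executes the file's top-level code, while use <file.scad> only imports the modules and functions defined in it. OpenSCAD looks for library files relative to the current file and in the library path (the platform's libraries folder, plus any directories listed in the OPENSCADPATH environment variable). A minimal sketch of the mechanism (the file, module, and function names here are invented for the example):

    // mylib.scad -- a small reusable library (hypothetical)

    // Modules are the unit of reuse for geometry.
    module rounded_plate(size = [20, 20, 3], r = 2) {
        // Shrink the 2D outline by r, then grow it back with a round
        // offset to round the convex corners; finally extrude to 3D.
        linear_extrude(height = size[2])
            offset(r = r) offset(delta = -r)
                square([size[0], size[1]]);
    }

    // Functions are the unit of reuse for values.
    function plate_area(size) = size[0] * size[1];

    // main.scad -- a design reusing the library
    use <mylib.scad>  // imports modules/functions; does NOT run top-level code
    // include <mylib.scad> would also execute any top-level statements.

    rounded_plate(size = [40, 30, 4], r = 3);
    echo(plate_area([40, 30]));  // ECHO: 1200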
As with all code reuse, there is the problem of library scope overlap, where two libraries solve the same issue. This cannot be truly solved, but as a best practice, I would choose the single most appropriate library for each design project and then stick with whatever that library has to offer. For example, don't depend on both BOSL and NopSCADlib just because you like BOSL for everything except its threading functions, which you like better in NopSCADlib. For a small project, I like to work with only Round Anything, which is small and compact.
To help you get started with choosing a library appropriate for your project, I include a list of examples below that I think show good practices of reusable OpenSCAD code. I had a long look at OpenSCAD libraries recently, and this is the result. Most of them come from the official OpenSCAD libraries page, which I found recommends only a few, but very good, libraries.
My favourite libraries, roughly in my personal order of desirability:
BOSL (source, docs) and BOSL2 (source, docs). "The Belfry OpenScad Library - A library of tools, shapes, and helpers to make OpenScad easier to use." Includes lots of modules and functions to make OpenSCAD code more readable. Overall, it's like MCAD in scope, but much better in execution. BOSL2 is a much extended second edition of BOSL, but as of 2020-11 the author says it is not yet ready for production use.
BOSL includes a very good Bézier library, as well as a threading library.
Round Anything (source, API, visual overview). "Round-Anything is primarily a set of OpenSCAD utilities that help with rounding parts, but it also embodies a robust approach to developing OpenSCAD parts." (A usage sketch follows after this list.)
NopSCADlib (source, docs). A very large library. Use it for any kind of machine design, as it contains nuts, bolts, washers, electronic components, belts, etc. "It contains lots of vitamins (the RepRap term for non-printed parts), some general purpose printed parts and some utilities. There are also Python scripts to generate Bills of Materials (BOMs), STL files for all the printed parts, DXF files for CNC routed parts in a project and a manual containing assembly instructions and exploded views by scraping markdown embedded in OpenSCAD comments, see scripts."
Also contains a 3D sweep function and a thread generation module.
BOLTS (source, docs). "BOLTS is an Open Library of Technical Specifications." Contains all kinds of models for standard metal hardware parts (example).
dotSCAD (source). Seems to be one of the best general-purpose libraries for OpenSCAD, being huge, of good quality, and well maintained. Mostly focused on math-art parts. For an overview of the designs made by the author of dotSCAD using that library, see here. For background articles about the designs made with dotSCAD, see here.
MCAD (source, docs). This is so far the only library shipped with every installation of OpenSCAD, so it would qualify as its standard library. There is no need to tell users of your designs to install anything when you only include MCAD.
Note that currently (as of 2020-11), a large rework is being done to MCAD, with the effect that the dev branch has nearly twice as many commits as the master branch. You'll find many goodies there, but of course users of your design would then have to install the dev branch first.
The problem with MCAD, especially the current master branch, is that I don't find it useful. It is so far rather a non-integrated hotchpotch of contributions from many authors. But since it's the standard library, we should give it a chance. When I have something generally useful, I'll try to contribute it there.
Revolve2 (source, announcement). In terms of speed, this is hands-down the best thread generation library I could find. I have not yet tested the threading features in BOSL and NopSCADlib, though.
3D sweep demos (source). But note that this is demo code rather than a library; first try the sweep module in NopSCADlib.
scad-utils (source).
Relativity (source). A library to arrange objects relative to each other. It also includes a CSS-like styling language for objects. Seemingly no longer in active development, but still great for learning really advanced OpenSCAD techniques.
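To give a feel for what working with one of these libraries looks like, here is a minimal sketch using Round Anything's polyRound() (written from memory, so treat the include path and parameter conventions as assumptions and check the library's API docs):

    // Assumes Round-Anything is installed in the OpenSCAD library path.
    include <Round-Anything/polyround.scad>

    // Each point is [x, y, corner_radius]; polyRound() returns a point list
    // with those corners rounded, ready to be passed to polygon().
    outline = [
        [ 0,  0, 0],   // sharp corner
        [40,  0, 8],   // 8 mm corner radius
        [40, 20, 4],
        [ 0, 20, 0]
    ];

    linear_extrude(height = 5)
        polygon(polyRound(outline, 30));  // 30 = segments per rounded corner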

After some research, it seems that the BOSL2 library is the most complete:
BOSL2 Library documentation
BOSL2 Library
[edit 2022: update BOSL to BOSL2]
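For a first impression of BOSL2's style, a minimal sketch (from memory; verify the module and parameter names against the BOSL2 docs before relying on them):

    include <BOSL2/std.scad>

    // cuboid() is BOSL2's enhanced cube(): centered by default,
    // with built-in edge rounding.
    cuboid([30, 20, 10], rounding = 3, $fn = 32);

The point of a library like BOSL2 is exactly this kind of readability: one call replaces a hand-rolled hull() or minkowski() construction.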

Related

UML Tool for Moose Perl [closed]

I'm starting to work with Moose/Perl and I'm searching for a UML tool to create diagrams representing the Moose OO system. I have already worked with Astah (formerly JUDE), but it is designed for the Java OO system. Can someone recommend another UML tool to work with Moose/Perl?
My two cents:
I have written an extension (a .xom file) for Sybase PowerDesigner. This tool has a powerful metaclass editor that you can script with VBScript and a proprietary language, GTL. It also has large collections of customizable metaclasses and templates.
My PowerDesigner extension is quite hacky and contains some stale code that I didn't clean up. Therefore I haven't published anything. It works for me, and only for me. Some lessons learned, off the top of my head:
I wanted to do UML modeling and code-generation, do you want to do that, too?
Moose is quite attribute-heavy so a UML approach is worth doing in this respect.
Didn't use roles much, but I tried to map them to interfaces anyway.
I am not satisfied with how to model relationships. There are lots of edge cases and "impedance mismatches" between UML concepts and Moose/Perl concepts. (BTW, what's the Moose equivalent of an "association class"?)
Native traits are a nice feature in Moose, but I haven't succeeded in creating a GUI for editing them.
I also hurt my brain designing a comprehensible GUI for type coercions (I often need to check and coerce date values).
Static attributes are an important feature in UML but less important in Moose. The problem is that there is no "static" keyword in Perl/Moose; you have to declare "use MooseX::ClassAttribute" (or whatever it is called), and do it only once per class, but in the right place (order matters).
The generated code is impossible to pretty-print, so I usually send it through perltidy right away to bring it to a "canonical" form, making diffing and versioning / committing to SVN easier.
When a class is generated, the compactness of a Moose class is gone: you'll have SVN properties, header comments, lots of "use" + "use lib" statements, lots of POD, some comment lines after each sub declaration with parameter-passing docs, and the obligatory footer ("no Moose; ...").
Unfortunately, reverse engineering Perl code (to update a UML model from code) is impossible. Thus, at some point I must stop working in the UML tool and start to edit the Perl code directly, abandoning the model. Checking these changes back in must be done manually at some later time, is very time consuming, and requires care.
Advantages:
Generating properly POD-documented code is the main productivity gain you'd get by doing all this UML modeling, IMHO. Good for "enterprisey" programming environments.
You can autogenerate *.t files with test cases (or stubs of test cases). It requires some thinking to design smart tests and to avoid the problems Dave Rolsky has written about in this blog post: "add(ing) absolutely nothing that isn't already tested by Moose itself".
You can define custom checks in the model such as "check if builder methods for all declared attributes exist, and if they don't exist, create a stub, or (ask me what to do)"
Easy mapping of nightmarish database tables to Moose classes. (I have to work with lots of multi-column tables that cannot be touched.) Build your own graphical ORM mapper!
There might be even more advantages.
I haven't seen a UML tool yet for Moose. It wouldn't be that difficult to build one, just a little labor-intensive. Mostly it would require crawling the meta-class tree for a given class and outputting the proper UML markup for each step. If you are interested in building something like this, you can stop by #moose on irc.perl.org. I'm sure someone can help point you in the right direction.
Just stumbled across your question while looking for "UML Moose perl".
One of the other links thrown up was to a utility called umlclass.pl that looks quite interesting.
I'll post a follow-up after seeing how well it works.

Best way to organize bioinformatics projects? [closed]

I come from a computer science background, but I am now doing genomics.
My projects involve a lot of bioinformatics, typically: aligning sequences; comparing overlap between sequences and various genome annotation features; data from different classes of biological samples; time-course data; microarray and high-throughput sequencing ("next-generation" sequencing, though it's actually the current generation) data; this kind of stuff.
The workflow with this kind of analyses is quite different from what I experienced during my computer science studies: no UML and thoughtfully designed objects shining with sublime elegance, no version management, no proper documentation (often no documentation at all), no software engineering at all.
Instead, what everyone does in this field is hacking out one Perl-script or AWK-one-liner after the other, usually for one-time usage.
I think the reason is that the input data and formats change so fast, the questions need to be answered so soon (deadlines!), that there seems to be no time for project organization.
One example to illustrate this: Let's say you want to write a raytracer. You would probably put a lot of effort into the software engineering first, then program it, finally in some highly optimized form, because you would use the raytracer countless times with different input data and would make changes to the source code over years to come. So good software engineering is paramount when coding a serious raytracer from scratch. But imagine you want to write a raytracer where you already know that you will use it to raytrace one single picture, ever. And that picture is of a reflecting sphere over a checkered floor. In this case you would just hack it together somehow. Bioinformatics consists almost entirely of the latter case.
You end up with whole directory trees holding the same information in different formats until you have reached the one particular format necessary for the next step, and dozens of files with names like "tmp_SNP_cancer_34521_unique_IDs_not_Chimp.csv" where you don't have the slightest idea one day later why you created this file and what exactly it is.
For a while I was using MySQL which helped, but now the speed in which new data is generated and changes formats is such that it is not possible to do proper database design.
I am aware of one single publication which deals with these issues (Noble, W. S. (2009, July). A quick guide to organizing computational biology projects. PLoS Comput Biol 5 (7), e1000424+). The author sums the goal up quite nicely:
The core guiding principle is simple: someone unfamiliar with your project should be able to look at your computer files and understand in detail what you did and why.
Well, that's what I want, too! But I am following the same practices as that author already, and I feel it is absolutely insufficient.
Documenting each and every command you issue in Bash, commenting on why exactly you did it, etc., is just tedious and error-prone. The steps in the workflow are just too fine-grained. Even if you do it, it can still be an extremely tedious task to figure out what each file was for, at which point a particular workflow was interrupted, for what reason, and where you continued.
(I am not using the word "workflow" in the sense of Taverna; by workflow I just mean the steps, commands and programs you choose to execute to reach a particular goal).
How do you organize your bioinformatics projects?
I'm a software specialist embedded in a team of research scientists, though in the earth sciences, not the life sciences. A lot of what you write is familiar to me.
One thing to bear in mind is that much of what you have learned in your studies is about engineering software for continued use. As you have observed a lot of what research scientists do is about one-off use and the engineered approach is not suitable. If you want to implement some aspects of good software engineering you are going to have to pick your battles carefully.
Before you start fighting any battles, you are going to have to critically examine your own ideas to ensure that what you learned in school about general-purpose software engineering is valid for your current situation. Don't assume that it is.
In my case the first battle I picked was the implementation of source code control. It wasn't hard to find examples of all the things that go wrong when you don't have version control in place:
some users had dozens of directories each with different versions of the 'same' code, and only the haziest idea of what most of them did that was unique, or why they were there;
some users had lost useful modifications by overwriting them and not being able to remember what they had done;
it was easy to find situations where people were working on what should have been the same program but were in fact developing incompatibly in different directions;
and so on, and so on.
Once I had gathered the information -- and make sure you keep good notes about who said what and what it cost them -- it became relatively easy to paint a picture of a better world with source code control.
Next, well, next you have to choose your own next battle. But one of the seeds of doubt you have to sow in your scientist-colleagues' minds is 'reproducibility'. Scientific experiments are not valid if they are not reproducible; if their experiments involve software (and they always do), then careful software engineering is essential for reproducibility. A lot of this is about data provenance, but that's a topic for another day.
Part of the issue here is the distinction between documentation for software vs documentation for publication.
For software development (and research plan) design, the important documentation is structural and intentional. Thus, modeling the data, reasons why you are doing something, etc. I strongly recommend using the skills you've learned in CS for documenting your research plan. Having a plan for what you want to do gives you a lot of freedom to multi-task while long analyses are running.
On the other hand, a lot of bioinformatics work is analysis. Here, you need to treat documentation like a lab notebook, not necessarily a project plan. You want to document what you did, maybe a brief comment on why (e.g. when you are troubleshooting data), and what the outputs and results are.
What I do is fairly simple.
First, I start in a directory and create a git repo. Then, whenever I change some file, I commit it to the repo. As much as possible, I try to name data outputs in a way that lets me drop them into my .gitignore files.
Then, as much as possible, I work in a single terminal session per project, and when I hit a pause point (like when I've sent a set of jobs up to the grid), I run 'history | cut -c 8-' and paste that into a lab notes file. I then edit the file to add comments on what I did and, importantly, change the git add/commit lines to git checkout (I have a script that does this based on the commit messages). As long as I start in the right directory, and my external data doesn't go away, this means that I can recreate the entire process later.
For any even slightly complex processing tasks, I write a script to do it, so that my notebook, as much as possible, looks clean. To an approximation, a helper script can be viewed as a subroutine in a larger project, and should be documented internally to at least that level.
Your question is about project management. Bad project management is not unique to bioinformatics. I find it hard to believe that the entire industry of bioinformatics is committed to bad software design.
About the pressure... Again, there are others in this world who have very challenging deadlines, and they are still using good software designs.
In many cases, following a good software design does not hold a project back and may even speed up its design and maintenance (at least in the long run).
Now to your real question... You can offer your manager to redesign small parts of the code that have no influence on the rest of the code as a proof of concept (POC), but it's really hard to stop a truck that keeps on moving, so don't get upset if he feels "we have worked this way for years; we know what we are doing, and we don't need a child to teach us how to do our work". Learn to work like the rest, and when you have gained their trust, you can do your own thing once in a while (I hope you will have the time and the devotion to do the right thing).
Good luck.

How do you collaboratively write specs? [closed]

I am working with a small team (2 others) of developers that are geographically dispersed, and I'm looking for good ways for us to collaborate on specs... We're thinking we might use Google Docs to write the spec in so we can all have access to modify it in a central location.
What have you done? What good ideas do you have?
If you have an intranet or VPN, I would actually consider installing and using a small Wiki for these specs.
Compared to Google docs you get:
Much better versioning and change tracking (IMHO)
Much easier to start new documents for subsections
Actual markup rather than WYSIWYG (a matter of taste; I prefer LaTeX to Word).
Possible to attach a variety of other file types
Very easy to backup
Very easy to create an offline version
You don't have to worry about storing sensitive materials elsewhere.
The disadvantage is that it is not WYSIWYG, which may or may not be an issue to you.
Of course, you can pick a Wiki implementation that supports a better editor, and possibly even a synchronous collaboration one.
Google Wave - exactly what it's meant for - collaboration
IMHO, a word processor is the wrong tool for a programmer. A spec should be written in a plain text editor, and utilize lightweight markup such as reStructuredText, AsciiDoc etc.
The benefits of such an approach are:
There are excellent tools to manage plain text, that are already in the hands of programmers (VCS, automated build systems, diff, patch, programming editors, grep, etc.)
A markup language allows for expressing intent rather than formatting.
With that in mind, a wiki seems to be the obvious choice.
Personally my tool chain of choice is:
reStructuredText as the markup language.
Trac as a Wiki
Firefox + the It's All Text extension
Emacs + rst-mode
The choice of technology is one issue, and Google Docs is a good choice, IMHO. But the real challenge is how to manage the process, e.g. how to divide the tasks.
My suggestion is to first make sure that the platform and all related technologies are decided upon as well as feasible. Then, compose a thorough table of contents. A well-designed TOC will allow you to divide tasks properly and not "step" on each other's work. From then on, you each "flesh out" your assigned sections as well as review each other's work.
In effect, each TOC subsection becomes an atomic unit of work that can be assigned and maintained by an individual who is also accountable for said section(s).
Good luck!
I think it depends on
How heavily into writing the specs you all are
If you're likely writing at the same time
Whether you intend to publish the specs.
Google Docs is nice and easy to get started with. It's also great that you can now export folders all at once. Still, for something that's going to be published to the web, a wiki or general cms is a better presentation vehicle. A wiki will also integrate with your existing site.
If you've got small specs, primarily written by one person then use whatever tool is available where you're hosting the project code or website. If you're not likely to be editing at the same time then a wiki is good.
I've done the wiki thing, the passed document thing and the Google Docs thing.
The wiki thing has a low starting effort and lasts a pretty long time. At a certain size it does get to be a pain.
The passed-document thing (write, email, edit, email, etc.) only works while one person is starting everything up. As soon as there are even minor edits, it sucks.
The Google Docs thing is fine until you have several docs and several editors or want to publish it online.
hth
This isn't programming related, but I've personally used Google Docs to write shared documents and found it easy to use.
I would suggest enabling Google Gears, however, in case the Google servers go down momentarily or an internet connection isn't available.
For writing specs collaboratively, you could try Gingko.
It's a card-tree editor, which means it's a mix between index cards and an outliner, with real-time collaboration and full Markdown support (as well as basic LaTeX).
We are still missing several features (version history, comments, etc), but for some the benefits of having everything in a tree structure outweigh these drawbacks.
Writing specs with it is great, because you can create a card for each user story, and drill into it as much as you like (and organize them into categories if you'd like).
http://gingkoapp.com

How to keep code and specs in sync? - are there good tools [closed]

In my team we've got a great source control system and we have great specs. The problem I'd like to solve is how to keep the specs up to date with the code. Over time, the specs tend to age and become out of date.
The folks making the specs tend to dislike source control and the programmers tend to dislike sharepoint.
I'd love to hear what solutions others use. Is there a happy middle somewhere?
Nope. There's no happy middle. They have different audiences and different purposes.
Here's what I've learned as an architect and spec writer: Specifications have little long-term value. Get over it.
The specs, while nice to get programming started, lose their value over time no matter what you do. The audience for the specification is a programmer who doesn't have much insight. Those programmers morph into deeply knowledgeable programmers who no longer need the specs.
Parts of the specification -- overviews in particular -- may have some long-term value.
If the rest of the spec had value, the programmers would keep them up to date.
What works well is to use comments embedded in the code and a tool to extract those comments and produce the current live documentation. Java does this with javadoc. Python does this with epydoc or Sphinx. C (and C++) use Doxygen. There are a lot of choices: http://en.wikipedia.org/wiki/Comparison_of_documentation_generators
The overviews should be taken out of the original specs and placed into the code.
A final document should be extracted. This document can replace the specifications by using the spec overviews and the code details.
When major overhauls are required, there will be new specifications. There may be a need for revisions to existing specifications. The jumping-off point is the auto-generated specification documents. The spec authors can start with those and add/change/delete to their heart's content.
I think a non-Sharepoint wiki is good for keeping documentation up to date. Most non-technical people can understand how to use a wiki, and most programmers will be more than happy to use a good wiki. The wiki and documentation control systems in Sharepoint are clunky and frustrating to use, in my opinion.
Mediawiki is a good choice.
I really like wikis because they are by far the lowest pain to adopt and keep up. They give you automatic version control, and are usually very intuitive for everyone to use. A lot of companies will want to use Word, Excel, or other types of docs for this, but getting everything online and accessible from a common interface is key.
As much as possible, the spec should be executable, via RSpec, doctest, or similar frameworks. The spec of the code should be documented with unit tests and code that has well-named methods and variables.
Then the spec documentation (preferably in a wiki) should give you the higher level overview of things - and that won't change much or get out of sync quickly.
Such an approach will certainly keep the spec and the code in sync and the tests will fail when they get out of sync.
That being said, on many projects the above is kind of pie-in-the-sky. In that case, S. Lott is right, get over it. They don't stay in sync. Look to the spec as the roadmap the developers were given, not a document of what they did.
If having a current spec is very important, then there should be specific time on the project allocated to write (or re-write) the spec after the code is written. Then it will be accurate (until the code changes).
An alternative to all of this is to keep the spec and the code under source control and have check-ins reviewed to ensure that the spec changed along with the code. It will slow down the development process, but if it is really that important ...
One technique used to keep the documentation in sync with the code is literate programming. This keeps the code and the documentation in the same file and uses a preprocessor to generate the compilable code from the documentation. As far as I know this is one of the techniques Donald Knuth uses - and he's happy to pay people money if they find bugs in his code.
I don't know of any particularly good solution for precisely what you're describing; generally, the only solutions that I've seen that really keep this sort of stuff in sync are tools that generate documentation from the source code (doxygen, Javadoc).

Are there any version control systems for 3d models / 3d data? [closed]

Well, the subject is basically the question: are there any version control systems out there for 3D models? An open source approach would be preferred, of course.
I am looking for functionality along the lines of Subversion; however, more basic systems would be of interest as well. Basic operations like branching / merging / commit should be available in one form or another.
UPDATE: With open source approach I don't mean free but the option of heavily expanding and customizing the system when needed
UPDATE2: I don't know how to describe this in the best way but the format of the 3d model is not that important. We will be using IFC models and mostly CAD programs. The approach Adam Davis describes is probably what I am looking for.
This is going to be difficult since most 3D CAD programs do not take into account the possibility of revision, so when you load something and then save it again it may completely re-order the points (there are reasons for this, usually done for performance).
Further, large models represented in a text format are huge files, and will take forever to copy/merge/etc.
There is no current system that will manage this, but there's a really big need in the industry for it.
I would expect such a system would have a model normalizer that converts to and from the desired CAD format and the revision format. It could then handle merges and track changes more easily.
It would also need to output diffs in a form such that you could open a "diffed" model in a CAD program and see the changes shown in a different color or otherwise highlighted. No one is going to be able to look at a text diff and understand what they're looking at. This diffing program would ultimately need to understand that two models are the same even though the 0,0,0 location and rotation are not the same (a difficult matching problem) and give the user some interface to help it when it gets stuck.
You'd probably have to deal with the parts of the model separately (bones, mesh, textures, etc) and have a third file that synchronizes them when converting them to an inclusive model file for use and modification.
It's not a trivial problem... But if you started on something that just handled meshes and open sourced it, you'd probably get a lot of people interested.
Although this question is old, it's still in Google's results for 3D version control. Luckily, in the years that have passed since the question was asked, GitHub has started supporting 3D STL files with visual diffs!
Have a look at http://3drepo.org. It's an open source revision control framework for 3D assets, and it is highly extensible.
Although I realize it's a slightly different topic, you might be interested in the answers to the question Version Control for Graphics...
Similar to what GingaNinja said, if all you care about is management of binary files at different revisions, most revision control systems will work for you. However, if you are looking for a tool that will display the changes in the actual images, you might be hard pressed to find a recommendation for a tool here. I would start by asking on graphic artist forums.
There is a diff tool for common 3D formats being released in about a week. It supports dxf/dwg, obj, stl, and igs. It may not be perfect since it is still in version 1, but hopefully it can help with your problem. The tool is called Differ3D and it can be found at http://www.blackspiralsoftware.com.
Disclaimer: I work for the company that released this product. We are looking to improve it so any feedback would be welcome.
I was under the impression that SVN is perfect for any kind of project that uses text files. So if your model is made up of text files, then it would be fine.
I don't see how binary data would work, as all version control that I know of makes use of diff management, which uses text comparisons.
3D models and data are just data files, whether their format is text or binary. Version control systems can handle both, since you often check in libraries etc., which are binary files.
I'm not quite sure what you mean by "open source approach". Do you mean a free solution? You can get open source projects which you have to pay for, depending on your usage, e.g. Qt.
Subversion or CVS will store text or binary models, and both are free. Subversion is preferable to CVS since it can commit multiple files in change sets. On Windows you can use TortoiseSVN, which is an excellent, free tool set.
If you use Subversion, you must remember to lock (assuming the files are binary, which nearly all 3D model formats are). Other than Subversion and other OSS like it, you might look at Gridiron Flow, the new content/workflow management software from Gridiron Software. John Nack of Adobe gave it a rave review.
DXF is a text-file standard (somewhat similar to XML), but I don't think merging these types of files is a particularly good idea.
If you wanted to perform a diff operation on two AutoCAD files, you can programmatically address individual objects by their "Handle", a unique hex identifier. Location, rotation, scaling, colour, etc. are properties of the object. CAD drawings are basically an object database. I don't know of any product that does this. Change tracking is a viable proposition, but merging would be a lot more complicated.