Collection of Great Applications and Programs using Macros

I am very interested in macros and am just beginning to understand their true power. Please help me collect some great uses of macro systems.
So far I have these constructs:
Pattern Matching:
Andrew Wright and Bruce Duba. Pattern matching for Scheme, 1995.
Relations in the spirit of Prolog:
Dorai Sitaram. Programming in Schelog. http://www.ccs.neu.edu/home/dorai/schelog/schelog.html
Daniel P. Friedman, William E. Byrd, and Oleg Kiselyov. The Reasoned Schemer. The MIT Press, July 2005.
Matthias Felleisen. Transliterating Prolog into Scheme. Technical Report 182, Indiana University, 1985.
Extensible Looping Constructs:
Sebastian Egner. Eager comprehensions in Scheme: The design of SRFI-42. In Workshop on Scheme and Functional Programming, pages 13–26, September 2005.
Olin Shivers. The anatomy of a loop: a story of scope and control. In International Conference on Functional Programming, pages 2–14, 2005.
Class Systems:
PLT. PLT MzLib: Libraries manual. Technical Report PLT-TR2006-4-v352, PLT Scheme Inc., 2006. http://www.plt-scheme.org/techreports/
Eli Barzilay. Swindle. http://www.barzilay.org/Swindle.
Component Systems:
Ryan Culpepper, Scott Owens, and Matthew Flatt. Syntactic abstraction in component interfaces. In International Conference on Generative Programming and Component Engineering, pages 373–388, 2005.
Software Contract Checking:
Matthew Flatt and Matthias Felleisen. Units: Cool modules for HOT languages. In ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 236–248, 1998.
Oscar Waddell and R. Kent Dybvig. Extending the scope of syntactic abstraction. In Symposium on Principles of Programming Languages, pages 203–215, 1999.
Parser Generators:
Scott Owens, Matthew Flatt, Olin Shivers, and Benjamin McMullan. Lexer and parser generators in Scheme. In Workshop on Scheme and Functional Programming, pages 41–52, September 2004.
Tools for Engineering Semantics:
Matthias Felleisen, Robert Bruce Findler, and Matthew Flatt. Semantics Engineering with PLT Redex. MIT Press, August 2009.
Specifications of Compiler Transformations:
Dipanwita Sarkar, Oscar Waddell, and R. Kent Dybvig. A nanopass framework for compiler education. Journal of Functional Programming, 15(5):653–667, September 2005. Educational Pearl.
Novel Forms of Execution (e.g., servlets with serializable continuations):
Greg Pettyjohn, John Clements, Joe Marshall, Shriram Krishnamurthi, and Matthias Felleisen. Continuations from generalized stack inspection. In International Conference on Functional Programming, pages 216–227, 2005.
Theorem-Proving Systems:
Extensions of the Base Language with Types:
Sam Tobin-Hochstadt and Matthias Felleisen. The design and implementation of Typed Scheme. In Symposium on Principles of Programming Languages, pages 395–406, 2008.
Laziness:
Eli Barzilay and John Clements. Laziness without all the hard work: combining lazy and strict languages for teaching. In Functional and Declarative Programming in Education, pages 9–13, 2005.
Functional Reactivity:
Gregory H. Cooper and Shriram Krishnamurthi. Embedding dynamic dataflow in a call-by-value language. In European Symposium on Programming, 2006.
Reference:
Collected from Ryan Culpepper's Dissertation

Culpepper & Felleisen, Fortifying Macros, ICFP 2010
Culpepper, Tobin-Hochstadt and Felleisen, Advanced Macrology and the Implementation of Typed Scheme, Scheme Workshop 2007
Flatt, Findler, Felleisen, Scheme with Classes, Mixins, and Traits, APLAS 2006
Herman, Meunier, Improving the Static Analysis of Embedded Languages via Partial Evaluation, ICFP 2004

Shivers, Carlstrom, Gasbichler & Sperber (1994 & later) The Scsh Reference manual.
Has a lot of good examples of using macros to embed mini-languages into Scheme. Introduced me to the technique of defining macros that implicitly quote their argument. Look at the use of process forms, regular expressions, and the awk-like mini-languages. Scsh is my recommendation as a starting point for playing with macros.
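To give a flavor of that technique, here is a minimal sketch of a macro that implicitly quotes its arguments (a hypothetical sh form with a print-only runner, not scsh's actual run syntax):

    ;; Stand-in runner: just print the words it would execute.
    (define (execute-command words)
      (for-each (lambda (w) (display w) (display " ")) words)
      (newline))

    ;; Hypothetical `sh` macro: the caller writes bare words and the macro
    ;; quotes them, so a call reads like a shell command line.
    (define-syntax sh
      (syntax-rules ()
        ((_ word ...) (execute-command '(word ...)))))

    (sh echo hello world)   ; expands to (execute-command '(echo hello world))

The point is that quoting happens at the macro boundary, which is what lets an embedded mini-language look like native syntax to the caller.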
Hilsdale & Friedman (2000) Writing Macros in Continuation-Passing Style.
Shows how the weak syntax-rules macros can be made powerful using continuation-passing style. Gives plenty of examples.
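As a small taste of the style (my own sketch, not an example taken from the paper): a syntax-rules helper that reverses a syntax list at expansion time and then hands the result to a continuation macro.

    ;; CPS-style syntax-rules: the last argument is a "continuation" macro call
    ;; (k-head k-arg ...); once the reversal is finished, the accumulated result
    ;; is appended to that call and the continuation macro carries on.
    (define-syntax reverse/k
      (syntax-rules ()
        ((_ () (acc ...) (k-head k-arg ...))
         (k-head k-arg ... (acc ...)))
        ((_ (x y ...) (acc ...) k)
         (reverse/k (y ...) (x acc ...) k))))

    ;; A trivial continuation macro that just quotes whatever it is handed.
    (define-syntax quote-it
      (syntax-rules ()
        ((_ result) 'result)))

    (reverse/k (1 2 3) () (quote-it))   ; => (3 2 1)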
Flatt, Culpepper, Darais & Findler (submitted) Macros that Work Together - Compile-Time Bindings, Partial Expansion, and Definition Contexts.
Provides an overview of, and semantics for, the approach to macros in Racket/PLT Scheme. Not a whole lot of examples, but I think the paper has something you are looking for.

ReadScheme! Remember to check the extensive bibliography on ReadScheme.
http://library.readscheme.org/page3.html
One example I think you missed is embedding SQL syntax into Scheme.
http://repository.readscheme.org/ftp/papers/sw2002/schemeunit-schemeql.pdf
Macros are also used to write support for automated testing.
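For instance, here is a minimal sketch of a testing macro (a hypothetical check form, not the SchemeUnit API): because it is a macro rather than a procedure, it can report the unevaluated source expression alongside the value it produced.

    ;; Hypothetical `check` macro: captures the source form of the expression
    ;; so a failure message can show what was tested, not just the bad value.
    (define-syntax check
      (syntax-rules ()
        ((_ expr expected)
         (let ((actual expr))
           (if (equal? actual expected)
               (display "PASS\n")
               (begin
                 (display "FAIL: ") (write 'expr)
                 (display " => ")   (write actual)
                 (newline)))))))

    (check (+ 1 2) 3)   ; prints PASS
    (check (* 2 2) 5)   ; prints FAIL: (* 2 2) => 4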

Not a Scheme, but a somewhat similar Lisp dialect with very extensive use of macros: http://www.meta-alternative.net/mbase.html
There are macros implementing various kinds of pattern matching, list comprehensions, various parser generators (including a PEG/Packrat implementation), embedded Prolog, ADT visitor inference (like Scrap Your Boilerplate in Haskell), extensible syntax macros, a Hindley-Milner type system, Scheme-like syntax macros, and many more. Parts of that functionality could potentially be ported to Scheme; other parts need an extended macro system with explicit context.

I would add "The Scheme standard library itself" to the list. Look at the file boot-9.scm in the Guile distribution. Many of the most commonly used Scheme forms - case, and, etc. - are defined there as macros.
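As a flavor of what such definitions look like, here is a minimal sketch of and written in terms of if (illustrative only, not the actual boot-9.scm source):

    ;; Sketch of `and` as a macro expanding into nested `if`s.
    ;; Named my-and here to avoid shadowing the built-in form.
    (define-syntax my-and
      (syntax-rules ()
        ((_)           #t)                            ; (and)   => #t
        ((_ e)         e)                             ; (and e) => e
        ((_ e1 e2 ...) (if e1 (my-and e2 ...) #f))))

    (my-and 1 2 3)    ; => 3
    (my-and 1 #f 3)   ; => #f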

Here's an example of a pretty awesome use of Scheme macros to create efficient robotics systems written in Scheme.

This isn't particularly precise, insofar as it is spread out over a large number of very old publications, most of which I've never read, but IIRC large chunks of the Common Lisp Object System and the Meta-Object Protocol* can be, are, or were initially built with macros...
* Which together form, IMHO, by far the most advanced OO system programming has ever seen.

Check out one of my favorite implementations of a REST API: the Slack API client, which isn't written in Scheme but in Racket.
octotep/racket-slack-api

Related

What is the name of the programming style enabled by dependent types (think Coq or Agda)?

There is a programming "style" (or maybe paradigm, I'm not sure what to call it) which is as follows:
First, you write a specification: a formal description of what your program (in whole or in part) is to do. This is done within the programming system; it is not a separate artifact.
Then, you write the program, but - and this is the key distinction between this programming style and others - every step of this writing task is guided in some way by the specification you've written in the previous step. How exactly this guidance happens varies wildly; in Coq you have a metaprogramming language (Ltac) which lets you "refine" the specification while building the actual program behind the scenes, whereas in Agda you compose a program by filling "holes" (I'm not actually sure how it goes in Agda, as I'm mostly used to Coq).
This isn't exactly everyone's favorite style of programming, but I'd like to try practicing it in general-purpose, popular programming languages. At least in Coq I've found it to be fairly addictive!
...but how would I even search for ways to do it outside proof assistants? Which leads us to the question: I'm looking for a name for this programming style, so that I can try looking up tools that let me program like that in other programming languages.
Mind you, of course a more proper question would be to directly ask for examples of such tools, but AFAIK questions asking for lists of answers aren't appropriate for Stack Exchange sites.
And to be clear, I'm not all that hopeful I'm really going to find much; these are mostly academic pastimes, and your typical programming language isn't really amenable to this style of programming (for example, the specification language might end up being impossibly complex). But it's worth a shot!
It is called proof-driven development (or type-driven development). However, there is very little information about it.
This process you mention of slowly creating your program by means of Ltac (in the case of Coq) or holes (in the case of Agda and Idris) is called refinement. So you will also find references in the literature to this style as proof by refinement or programming by refinement.
Now the most important thing to realize is that this style of programming is intrinsic to more complex type systems that allow you to extract as much information as possible from the current environment. So it is natural to find it attached to dependent types, although that is not necessarily the case.
As mentioned in another response, you're also going to find references to it as type-driven development; there is an Idris book about it.
You may be interested in looking into some other projects such as Lean, Isabelle, Idris, Agda, Cedille, and maybe Liquid Haskell, TLA+ and SAW.
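As a tiny illustration of the spec-first, hole-driven workflow, here is a sketch in Lean 4 (one of the systems named above; the definition and names are purely illustrative, not from any of the cited sources):

    -- Step 1: state the specification as a type and leave the program as a hole:
    --   def doubleSpec (n : Nat) : { m : Nat // m = n + n } := sorry
    -- Step 2: refine the hole into an actual program; the type checker certifies
    -- that the result satisfies the specification.
    def doubleSpec (n : Nat) : { m : Nat // m = n + n } :=
      ⟨n + n, rfl⟩

    #eval (doubleSpec 21).val   -- 42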
As pointed out by the two previous answers, a fitting name for the programming style you mention is certainly type-driven development.
From the Coq viewpoint, you might be interested in the following two references:
Certified Programming with Dependent Types (CPDT, by Adam Chlipala): a Coq textbook that teaches advanced techniques to develop dependently-typed Coq theories and automate related proofs.
Experience Report: Type-Driven Development of Certified Tree Algorithms in Coq (by Reynald Affeldt, Jacques Garrigue, Xuanrui Qi, Kazunari Tanaka), published at the Coq Workshop 2019 (slides, extended abstract).
The authors also use the acronym TDD which, interestingly enough, also has another accepted meaning in the software engineering community: test-driven development (a widely used methodology that naturally leads to high-quality test suites).
Actually, both senses of TDD share a common idea: one systematically starts by writing the specification (of the unit under consideration), and only after that writes some code that fulfills the spec (makes the unit tests pass); then we loop and incrementally specify+implement(+refactor) other code units.
Last but not least, there are some extra pointers in this discussion from the Discourse OCaml forum.

How will Dotty change pure functional programming in Scala?

In this question from 2013, Mr. Odersky notes that "it's too early to tell" whether libraries like Scalaz will be able to exist (at least in their current state) under Dotty, due to the castration of higher-kinded and existential types.
In the time since, have Dotty's implications for Scalaz and Cats been elucidated? Will proposed features like built-in Effects and Records change the scope of these projects?
I understand that Dotty is still a ways off from replacing scalac, but as I am considering investing time in applying purely functional constructs and methodologies to my work, I believe it is important to consider the future of its flagship libraries.
One example of the latest on Dotty is "Scaling Scala" by Chris McKinlay (December 15, 2016); the same article also mentions the Scalaz and Cats situation.
Martin Odersky has been leading work on Dotty, a novel research compiler based on the Dependent Object Types (DOT) calculus (basically a simplified version of Scala) and ideas from the functional programming (FP) and database communities.
The team working on Dotty development has shown some remarkable improvements over the state of the art, most notably with respect to compilation times. I asked Odersky what he thought was novel about the Dotty architecture and would help end users. Here’s what he said:
Two things come to mind:
First, it's closely related to formal foundations, giving us better guidance on how to design a sound type system. This will lead to fewer surprises for users down the road.
Second, it has an essentially functional architecture. This makes it easier to extend, easier to get correct, and will lead to more robust APIs where the compiler is used as a service for IDEs and metaprogramming.
Although Dotty opens up a number of interesting language possibilities (notably full-spectrum dependent types, a la Agda and Idris), Odersky has chosen to prioritize making it immediately useful to the community. Language differences are fairly small, and most of them are in order to either simplify the language (like removing procedure syntax) or fix bugs (unsound pattern matching) or both (early initializers).
Still, I couldn’t resist asking him if there is any chance of full-spectrum dependent types ending up in Scala at some point. Here is what he said:
Never say never :-). In fact, we are currently working with Viktor Kuncak on integrating the Leon program prover with Scala, which demands richer dependent types than we have now. But it's currently strictly research, with a completely open outcome.
The Scala and Dotty teams are working closely toward convergence for Scala 2.x and Dotty, and they’ve indicated that they take continuity very seriously. Scala 2.12 and 2.13 have language flags that unlock features being incubated in Dotty (e.g., existential types), and the Dotty compiler has a Scala 2 compatibility mode. There’s even a migration tool.

Type systems of functional object-oriented languages

I would like to know how exactly modern typed functional object-oriented languages, such as Scala and OCaml, combine parametric polymorphism, subtyping, and their other features.
Are they based on System F<:, or something stronger, or weaker?
Is there a well-studied formal type system, like System FC for Haskell, which could serve as a "core" for these languages?
OCaml
The "core" of OCaml type theory consists of extensions of System F,
but the module system corresponds to a mix of F<:
(modules can be coerced into stricter signature by subtyping) and
Fω.
In the core language (without considering subtyping at the
module level), subtyping is very restricted in OCaml, as subtyping
relations cannot be abstracted over (there is no
bounded quantification). The language emphasizes polymorphic
parametrism instead, and in particular even the "extensible" type it
supports use row polymorphism at their core (with a convenience layer
of subtyping between closed such types).
For an introduction to type-theoretic presentations of OCaml, see the online book by Didier Remy, Using, Understanding, and Unraveling the OCaml Language (From Practice to Theory and vice versa) . Its further reading chapter will give you more reference, in particular about the treatment of object-orientation.
There has been a lot of work on formalizations of the module system part; arguably, the ML module systems do not naturally fit Fω or F<:ω as a core formalism (for once, type parameters are named in a module system, instead of being passed by position as in lambda-calculi). One of the best explanations of the correspondence is F-ing modules, first published in 2010 by Andreas Rossberg, Claudio Russo and Derek Dreyer.
Jacques Garrigue has also done a lot of work on the more advanced features of the language (that cannot be summarized as "just syntactic sugar over system F"), namely Polymorphic Variants (equi-recursives structural types), labelled arguments, and GADTs). Various descriptions of these aspects can be found on his webpage, including mechanized proofs (in Coq) of polymorphic variants and the relaxed value restriction.
You should also look at the webpage A few papers on Caml, which points to some of the research article around the OCaml language.
Scala
The similar page for Scala is this one. Particularly relevant to your question are:
A Core Calculus for Scala Type Checking, by Vincent Cremet, François Garillot, Sergueï Lenglet and Martin Odersky, 2006
Generics of a Higher Kind, by Adriaan Moors, Frank Piessens, and Martin Odersky, 2008

Where to find the details of Scala's type inference?

I want to understand how the type checking/inference algorithm works. It's very complicated and there are a lot of cases. Is there any good tutorial/documentation for this? (I am aware of the language specification, but IMO it's too hard to read.)
I simply want the details of how Scala's type inference works under the hood.
It's actually not very complicated. A very concise description can be found in section 16.9 of Odersky/Spoon/Venners's book 'Programming in Scala' (1st edition; in the second edition I believe it is section 16.10):
http://www.artima.com/pins1ed/working-with-lists.html#16.9
So if this is too basic, maybe the following paper helps you:
Vincent Cremet, François Garillot, Sergueï Lenglet and Martin Odersky, "A Core Calculus for Scala Type Checking", in: Lecture Notes in Computer Science, 2006, Volume 4162/2006, 1-23, DOI: 10.1007/11821069_1 (Springer).
You can find an accessible PDF version through Google Scholar.
Or you may want to look at the sources of Scala 2.12.x in https://github.com/scala/scala/blob/2.12.x/src/compiler/scala/tools/nsc/typechecker/Infer.scala.

Language requirements for AI development [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Why is Lisp used for AI?
What makes a language suitable for Artificial Intelligence development?
I've heard that LISP and Prolog are widely used in this field. What features make them suitable for AI?
Overall, I would say the main thing I see about languages "preferred" for AI is that they have higher-order programming along with many tools for abstraction.
It is higher-order programming (a.k.a. functions as first-class objects) that tends to be a defining characteristic of most AI languages (http://en.wikipedia.org/wiki/Higher-order_programming), as far as I can see. That article is a stub and it leaves out Prolog (http://en.wikipedia.org/wiki/Prolog), which allows higher-order "predicates".
But basically, higher-order programming is the idea that you can pass a function around like a variable. Surprisingly, a lot of the scripting languages have functions as first-class objects as well. LISP/Prolog are a given as AI languages. But some of the others might be surprising. I have seen several AI books for Python; one of them is http://www.nltk.org/book. I have also seen some for Ruby and Perl. If you study more about LISP, you will recognize that a lot of its features are similar to those of modern scripting languages. However, LISP came out in 1958... so it really was ahead of its time.
There are AI libraries for Java. And in Java you can sort of hack functions as first-class objects using methods on classes; it is harder/less convenient than LISP but possible. In C and C++ you have function pointers, although again they are much more of a bother than in LISP.
Once you have functions as first-class objects, you can program much more generically than is otherwise possible. Without functions as first-class objects, you might have to construct sum(array) and product(array) to perform the different operations. But with functions as first-class objects you could compute accumulate(array, +) and accumulate(array, *). You could even do accumulate(array, getDataElement, operation). Since AI is so ill-defined, that type of flexibility is a great help. Now you can build much more generic code that is much easier to extend in ways that were not originally even conceived.
And Lambda (now finding its way all over the place) becomes a way to save typing so that you don't have to define every function. In the previous example, instead of having to make getDataElement(arrayelement) { return arrayelement.GPA } somewhere you can just say accumulate(array, lambda element: return element.GPA, +). So you don't have to pollute your namespace with tons of functions to only be called once or twice.
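A minimal sketch of that accumulate idea in Scheme (the accumulate-by helper and the GPA records are just illustrative):

    ;; One generic accumulate covers sum, product, and friends because the
    ;; combining operation is passed in as a first-class function.
    (define (accumulate lst op init)
      (if (null? lst)
          init
          (op (car lst) (accumulate (cdr lst) op init))))

    (accumulate '(1 2 3 4) + 0)   ; => 10  (sum)
    (accumulate '(1 2 3 4) * 1)   ; => 24  (product)

    ;; The "getDataElement" step as a lambda: sum the GPA field of some
    ;; illustrative (name . gpa) records without defining a named helper.
    (define (accumulate-by lst key op init)
      (accumulate (map key lst) op init))

    (accumulate-by '((alice . 3.8) (bob . 3.2)) cdr + 0)   ; => 7.0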
If you go back in time to 1958, basically your choices were LISP, Fortran, or assembly. Compared to Fortran, LISP was much more flexible (unfortunately also less efficient) and offered much better means of abstraction. In addition to functions as first-class objects, it also had dynamic typing, garbage collection, etc. (stuff any scripting language has today). Now there are more languages to choose from, although LISP benefited from being first and becoming the language that everyone happened to use for AI. Now look at Ruby/Python/Perl/JavaScript/Java/C# and even the latest proposed standard for C, and you start to see features from LISP sneaking in (map/reduce, lambdas, garbage collection, etc.). LISP was way ahead of its time in the 1950s.
Even now, LISP still maintains a few aces in the hole over most of the competition. The macro systems in LISP are really advanced. In C you can extend the language with library calls or simple macros (basically a text substitution). In LISP you can define new language elements (think your own if statement, now think your own custom language for defining GUIs). Overall, LISP languages still offer ways of abstraction that the mainstream languages haven't caught up with. Sure, you can define your own custom compiler for C and add all the language constructs you want, but no one really does that. In LISP the programmer can do that easily via macros. Also, LISP is compiled and, per the programming language shootout, it is generally more efficient than Perl, Python, and Ruby.
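To make "define your own if statement" concrete, here is a minimal sketch of a new conditional form as a Scheme macro (the name when-let is just illustrative; many Lisps ship something similar):

    ;; when-let: bind a value and run the body only when the value is true.
    ;; This has to be syntax, not a function: the body must stay unevaluated
    ;; until we know the test succeeded.
    (define-syntax when-let
      (syntax-rules ()
        ((_ (name expr) body ...)
         (let ((name expr))
           (if name (begin body ...) #f)))))

    (when-let (entry (assq 'b '((a . 1) (b . 2))))
      (cdr entry))
    ;; => 2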
Prolog is basically a logic language made for representing facts and rules. What are expert systems but collections of rules and facts? Since it is very convenient to represent a bunch of rules in Prolog, there is an obvious synergy there with expert systems.
Now I think using LISP/Prolog for every AI problem is not a given. In fact, just look at the multitude of machine learning/data mining libraries available for Java. However, when you are prototyping a new system or are experimenting because you don't know what you are doing, it is way easier to do it with a scripting language than a statically typed one. LISP was one of the earliest languages to have all these features we take for granted. Basically there was no competition at all at first.
Also in general academia seems to like functional languages a lot. So it doesn't hurt that LISP is functional. Although now you have ML, Haskell, OCaml, etc. on that front as well (some of these languages support multiple paradigms...).
The main calling card of both Lisp and Prolog in this particular field is that they support metaprogramming concepts like lambdas. The reason that is important is that it helps when you want to roll your own programming language within a programming language, like you will commonly want to do for writing expert system rules.
To do this well in a lower-level imperative language like C, it is generally best to just create a separate compiler or language library for your new (expert system rule) language, so you can write your rules in the new language and your actions in C. This is the principle behind things like CLIPS.
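In a Lisp, by contrast, the rule language can live inside the host language. A toy sketch (a hypothetical defrule form, not CLIPS syntax) of what that embedding can look like with a Scheme macro:

    ;; Each rule pairs a set of required facts with an action; the macro turns
    ;; that description into an ordinary procedure over a fact list.
    (define-syntax defrule
      (syntax-rules (=>)
        ((_ name (required-fact ...) => action ...)
         (define (name facts)
           (if (and (member 'required-fact facts) ...)
               (begin action ...)
               #f)))))

    (defrule wet-ground (raining) => 'ground-is-wet)

    (wet-ground '(raining cold))   ; => ground-is-wet
    (wet-ground '(sunny))          ; => #f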
The two main things you want are the ability to do experimental programming and the ability to do unconventional programming.
When you're doing AI, you by definition don't really know what you're doing. (If you did, it wouldn't be AI, would it?) This means you want a language where you can quickly try things and change them. I haven't found any language I like better than Common Lisp for that, personally.
Similarly, you're doing something not quite conventional. Prolog is already an unconventional language, and Lisp has macros that can transform the language tremendously.
What do you mean by "AI"? The field is so broad as to make this question unanswerable. What applications are you looking at?
LISP was used because it was better than FORTRAN. Prolog was used, too, but no one remembers that. This was when people believed that symbol-based approaches were the way to go, before it was understood how hard the sensing and expression layers are.
But modern "AI" (machine vision, planners, hell, Google's uncanny ability to know what you 'meant') is done in more efficient programming languages that are more sustainable for a large team to develop in. This usually means C++ these days--but it's not like anyone thinks of C++ as a good language for AI.
Hell, you can do a lot of what was called "AI" in the 70s in MATLAB. No one's ever called MATLAB "a good language for AI" before, have they?
Functional programming languages are easier to parallelise due to their stateless nature. There already seems to be a question about it with some good answers here: Advantages of stateless programming?
As said, it's also generally simpler to build programs that generate programs in LISP, due to the simplicity of the language, but this is only relevant to certain areas of AI such as evolutionary computation.
Edit:
OK, I'll try and explain a bit about why parallelism is important to AI, using symbolic AI as an example, as it's probably the area of AI that I understand best. Basically it's what everyone was using back in the day when LISP was invented, and the physical symbol system hypothesis on which it is based is more or less the same way you would go about calculating and modelling stuff in LISP code. This link explains a bit about it:
http://www.cs.st-andrews.ac.uk/~mkw/IC_Group/What_is_Symbolic_AI_full.html
So basically the idea is that you create a model of your environment and then search through it to find a solution. One of the simplest algorithms to implement is breadth-first search, which is an exhaustive search of all possible states. While it produces an optimal result, it is usually prohibitively time-consuming. One way to optimise this is by using a heuristic (A* being an example); another is to divide the work between CPUs.
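For reference, a minimal breadth-first search sketch in Scheme (the neighbors and goal? procedures are assumed to be supplied by the caller):

    ;; Breadth-first search over an implicitly defined state graph.
    ;; `neighbors` maps a state to its successor states; `goal?` tests states.
    (define (bfs start neighbors goal?)
      (let loop ((frontier (list start)) (visited '()))
        (cond ((null? frontier) #f)                        ; search space exhausted
              ((goal? (car frontier)) (car frontier))      ; found a goal state
              ((member (car frontier) visited)
               (loop (cdr frontier) visited))              ; already expanded: skip
              (else
               (loop (append (cdr frontier)
                             (neighbors (car frontier)))   ; enqueue successors
                     (cons (car frontier) visited))))))

    ;; Toy example: states are numbers, successors are n+1 and 2n.
    (bfs 0
         (lambda (n) (list (+ n 1) (* n 2)))
         (lambda (n) (= n 6)))
    ;; => 6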
Due to statelessness, in theory any node you expand in your search could be run in a separate thread without the complexity or overhead involved in locking shared data. In general, assuming the hardware can support it, the more highly you can parallelise a task, the faster you will get your result. An example of this is the Folding@home project, which distributes work over many GPUs to find optimal protein folding configurations (that may not have anything to do with LISP, but it is relevant to parallelism).
As far as I know about LISP, it is a functional programming language, and with it you are able to make "programs that make programs". I don't know if my answer suits your needs; see the above links for more information.
Pattern matching constructs with instantiation (or the ability to easily construct pattern matching code) are a big plus. Pattern matching is not totally necessary to do A.I., but it can sure simplify the code for many A.I. tasks. I'm finding this also makes F# a convenient language for A.I.
Languages per se (without libraries) are suitable/comfortable for specific areas of research/investigation and/or learning/studying ("how to do the simplest things in the hardest way").
Suitability for commercial development is determined by the availability of frameworks, libraries, development tools, communities of developers, and adoption by companies. For example, on the internet you will find support for any issue/area, even the most exotic (including, of course, AI areas), in a language like C#, because it is mainstream.
BTW, what specifically is the context of the question? AI is such a broad term.
Update:
Oooops, I really did not expect to draw attention and discussion to my answer.
Under ("how to do the simplest things in the hardest way"), I mean that studying and learning, as well as academic R&D objectives/techniques/approaches/methodology do not coincide with objectives of (commercial) development.
In student (or even academic) projects one can write tons of code which would probably require one line of code in commercial RAD (using of component/service/feature of framework or library).
Because..! oooh!
Because there is no sense in entangling/developing any discussion without first agreeing on common definitions of terms... which are subjective and depend on context... and are not so easy to formulate in a general/abstract context.
And this is an interdisciplinary matter spanning whole areas of different sciences.
The question is broad (philosophical) and evasively formulated... without beginning or end... having no definitive answers without context and definitions...
Are we going to develop some spec proposal here?