What type of lambda calculus would Lisp loosely be an example of?

I'm trying to get a better grip on how types come into play in lambda calculus. Admittedly, a lot of the type theory stuff is over my head. Lisp is a dynamically typed language, would that roughly correspond to untyped lambda calculus? Or is there some kind of "dynamically typed lambda calculus" that I'm unaware of?

Lisp is a dynamically typed language, would that roughly correspond to untyped lambda calculus?
Yes, but only roughly. In the "pure" untyped lambda calculus, everything is coded as functions. (You can google for the popular "Church encoding" and the less popular "Scott encoding".) Lisp has non-functional data, like atoms and numbers and such, so this would count as "untyped lambda calculus extended with constants."
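To get a feel for what "everything is coded as functions" means, here is a quick sketch of Church booleans and numerals written directly in Scheme-style Lisp (these particular definitions are only an illustration, not part of any standard library):

(define true  (lambda (t f) t))          ; a Church boolean selects one of two arguments
(define false (lambda (t f) f))

(define zero (lambda (f x) x))           ; a Church numeral n means "apply f to x n times"
(define (succ n) (lambda (f x) (f (n f x))))

((succ (succ zero)) (lambda (k) (+ k 1)) 0)  ; decode back to a native number => 2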
Another important difference is in order of evaluation. Rules for reducing lambda-calculus terms are highly nondeterministic. (There's a theorem, the Church-Rosser theorem, which says loosely that as long as things terminate, order of evaluation doesn't matter.) In practice lambda terms are typically reduced using leftmost-outermost aka "normal-order" reduction because if any reduction strategy terminates, that one does.
This is very different from Lisp which always evaluates arguments to normal form before doing a beta-reduction. This evaluation order is called "call by value."
In summary, Lisp corresponds to an untyped, call-by-value lambda calculus extended with constants.
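A small illustration of the evaluation-order difference, in Scheme syntax (loop is just a made-up non-terminating function):

(define (loop) (loop))        ; a computation that never terminates

((lambda (x) 42) (loop))      ; call by value: the argument is evaluated first, so this never returns
; Under normal-order (leftmost-outermost) reduction, the same term reduces
; directly to 42, because the unused argument is never evaluated.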

John McCarthy introduced LISP in his April, 1960 paper "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". The following paragraph is from page 6:
e. Functions and Forms. It is usual in mathematics — outside of mathematical logic — to use the word “function” imprecisely and to apply it to forms such as y² + x. Because we shall later compute with expressions for functions, we need a distinction between functions and forms and a notation for expressing this distinction. This distinction and a notation for describing it, from which we deviate trivially, is given by Church [3].
...
3. A. CHURCH, The Calculi of Lambda-Conversion (Princeton University Press, Princeton, N. J., 1941).
The Wikipedia article on lambda-calculus has a history of Church's publications. The 1941 paper referenced by McCarthy seems to be about the typed lambda-calculus, in contradiction to the Wikipedia article's introduction.
The lambda keyword in Lisp can be understood to refer to the lambda-calculus only through analogy. A Lisp lambda expression is a type of anonymous function.

Lisp is not 'a lambda calculus'; I don't know what 'a lambda calculus' would even be.
If you want to identify lambda calculi by their type system, then Lisp is its own, of course. The 'lambda' keyword in any Lisp before Scheme is certainly pretentious, and even after Scheme there is room to say it still is. Just using 'func' would have been more humble. Lisp is mainly a list processor, not 'a lambda calculus'.
I also wrote a rather extensive article about this once that attempts to demonstrate why (A) the term 'functional programming' is meaningless and (B) speaking of 'a lambda calculus' rather than 'a type system' is meaningless as well:
http://blog.nihilarchitect.net/archives/289/on-functional-programming/
Also, keep in mind that in Lisp, all functions are in effect single-argument and can only have lists as their arguments.

lisp: when to use a function vs. a macro

In my ongoing quest to learn lisp, I'm running into a conceptual problem. It's somewhat akin to the question here, but maybe it's thematically appropriate to lisp that my question is a level of abstraction up.
As a rule, when should you create a macro vs. a function? It seems to me, maybe naively, that there would be very few cases where you must create a macro instead of a function, and that in most of the remaining cases a function would generally suffice. Of those remaining cases, it seems like the main additional value of a macro would be in clarity of syntax. And if that's the case, then it seems like not just the decision to opt for macro use but also the design of their structures might be fundamentally idiosyncratic to the individual programmer.
Is this wrong? Is there a general case outlining when to use macros over functions? Am I right that the cases where a macro is required by the language are generally few? And lastly, is there a general syntactic form that's expected of macros, or are they generally used as shorthands by programmers?
I found a detailed answer, from Paul Graham's On Lisp, bold emphases added:
Macros can do two things that functions can’t: they can control (or prevent) the evaluation of their arguments, and they are expanded right into the calling context. Any application which requires macros requires, in the end, one or both of these properties.
...
Macros use this control in four major ways:
Transformation. The Common Lisp setf macro is one of a class of macros which pick apart their arguments before evaluation. A built-in access function will often have a converse whose purpose is to set what the access function retrieves. The converse of car is rplaca, of cdr, rplacd, and so on. With setf we can use calls to such access functions as if they were variables to be set, as in (setf (car x) 'a), which could expand into (progn (rplaca x 'a) 'a).
To perform this trick, setf has to look inside its first argument. To know that the case above requires rplaca, setf must be able to see that the first argument is an expression beginning with car. Thus setf, and any other operator which transforms its arguments, must be written as a macro.
Binding. Lexical variables must appear directly in the source code. The first argument to setq is not evaluated, for example, so anything built on setq must be a macro which expands into a setq, rather than a function which calls it. Likewise for operators like let, whose arguments are to appear as parameters in a lambda expression, for macros like do which expand into lets, and so on. Any new operator which is to alter the lexical bindings of its arguments must be written as a macro.
Conditional evaluation. All the arguments to a function are evaluated. In constructs like when, we want some arguments to be evaluated only under certain conditions. Such flexibility is only possible with macros.
Multiple evaluation. Not only are the arguments to a function all evaluated, they are all evaluated exactly once. We need a macro to define a construct like do, where certain arguments are to be evaluated repeatedly.
There are also several ways to take advantage of the inline expansion of macros. It’s important to emphasize that the expansions thus appear in the lexical context of the macro call, since two of the three uses for macros depend on that fact. They are:
Using the calling environment. A macro can generate an expansion containing a variable whose binding comes from the context of the macro call. The behavior of the following macro:
(defmacro foo (x) `(+ ,x y))
depends on the binding of y where foo is called.
This kind of lexical intercourse is usually viewed more as a source of contagion than a source of pleasure. Usually it would be bad style to write such a macro. The ideal of functional programming applies as well to macros: the preferred way to communicate with a macro is through its parameters. Indeed, it is so rarely necessary to use the calling environment that most of the time it happens, it happens by mistake...
Wrapping a new environment. A macro can also cause its arguments to be evaluated in a new lexical environment. The classic example is let, which could be implemented as a macro on lambda. Within the body of an expression like (let ((y 2)) (+ x y)), y will refer to a new variable.
Saving function calls. The third consequence of the inline insertion of macro expansions is that in compiled code there is no overhead associated with a macro call. By runtime, the macro call has been replaced by its expansion. (The same is true in principle of functions declared inline.)
...
What about those operators which could be written either way [i.e. as a function or a macro]?... Here are several points to consider when we face such choices:
THE PROS
Computation at compile-time. A macro call involves computation at two times: when the macro is expanded, and when the expansion is evaluated. All the macro expansion in a Lisp program is done when the program is compiled, and every bit of computation which can be done at compile-time is one bit that won’t slow the program down when it’s running. If an operator could be written to do some of its work in the macro expansion stage, it will be more efficient to make it a macro, because whatever work a smart compiler can’t do itself, a function has to do at runtime. Chapter 13 describes macros like avg which do some of their work during the expansion phase.
Integration with Lisp. Sometimes, using macros instead of functions will make a program more closely integrated with Lisp. Instead of writing a program to solve a certain problem, you may be able to use macros to transform the problem into one that Lisp already knows how to solve. This approach, when possible, will usually make programs both smaller and more efficient: smaller because Lisp is doing some of your work for you, and more efficient because production Lisp systems generally have had more of the fat sweated out of them than user programs. This advantage appears mostly in embedded languages, which are described starting in Chapter 19.
Saving function calls. A macro call is expanded right into the code where it appears. So if you write some frequently used piece of code as a macro, you can save a function call every time it’s used. In earlier dialects of Lisp, programmers took advantage of this property of macros to save function calls at runtime. In Common Lisp, this job is supposed to be taken over by functions declared inline.
By declaring a function to be inline, you ask for it to be compiled right into the calling code, just like a macro. However, there is a gap between theory and practice here; CLTL2 (p. 229) says that “a compiler is free to ignore this declaration,” and some Common Lisp compilers do. It may still be reasonable to use macros to save function calls, if you are compelled to use such a compiler...
THE CONS
Functions are data, while macros are more like instructions to the compiler. Functions can be passed as arguments (e.g. to apply), returned by functions, or stored in data structures. None of these things are possible with macros.
In some cases, you can get what you want by enclosing the macro call within a lambda-expression. This works, for example, if you want to apply or funcall certain macros: (funcall #'(lambda (x y) (avg x y)) 1 3) => 2. However, this is an inconvenience. It doesn’t always work, either: even if, like avg, the macro has an &rest parameter, there is no way to pass it a varying number of arguments.
Clarity of source code. Macro definitions can be harder to read than the equivalent function definitions. So if writing something as a macro would only make a program marginally better, it might be better to use a function instead.
Clarity at runtime. Macros are sometimes harder to debug than functions. If you get a runtime error in code which contains a lot of macro calls, the code you see in the backtrace could consist of the expansions of all those macro calls, and may bear little resemblance to the code you originally wrote.
And because macros disappear when expanded, they are not accountable at runtime. You can’t usually use trace to see how a macro is being called. If it worked at all, trace would show you the call to the macro’s expander function, not the macro call itself.
Recursion. Using recursion in macros is not so simple as it is in functions. Although the expansion function of a macro may be recursive, the expansion itself may not be. Section 10.4 deals with the subject of recursion in macros...
Having considered what can be done with macros, the next question to ask is: in what sorts of applications can we use them? The closest thing to a general description of macro use would be to say that they are used mainly for syntactic transformations. This is not to suggest that the scope for macros is restricted. Since Lisp programs are made from lists, which are Lisp data structures, “syntactic transformation” can go a long way indeed...
Macro applications form a continuum between small general-purpose macros like while, and the large, special-purpose macros defined in the later chapters. On one end are the utilities, the macros resembling those that every Lisp has built-in. They are usually small, general, and written in isolation. However, you can write utilities for specific classes of programs too, and when you have a collection of macros for use in, say, graphics programs, they begin to look like a programming language for graphics. At the far end of the continuum, macros allow you to write whole programs in a language distinctly different from Lisp. Macros used in this way are said to implement embedded languages.
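To make the conditional-evaluation point above concrete, here is a minimal sketch of a when-like operator (my-when and the symbols in the example call are made-up names):

(defmacro my-when (test &body body)
  `(if ,test (progn ,@body) nil))     ; the body is evaluated only if TEST is true

(macroexpand-1 '(my-when ok (f) (g))) ; => (IF OK (PROGN (F) (G)) NIL)

A function version could not behave this way, because both of its arguments would already have been evaluated before the function was ever called.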
Yes, the first rule is: don't use a macro where a function will do.
There are a few things you can't do with functions, for example conditional evaluation of code. Others become quite unwieldy.
In general I am aware of three recurring use cases for macros (which doesn't mean that there aren't any others):
Defining forms (e. g. defun, defmacro, define-frobble-twiddle)
These often have to take some code snippet, wrap it (e. g. in a lambda form), and register it somewhere, maybe even in multiple places. The users (programmers) should only concern themselves with the code snippet. This is thus mostly about removing boilerplate. Additionally, the macro can process the body, e. g. registering docstrings, handling declarations, etc.
Example: Imagine that you are writing a sort of event mini-framework. Your event handlers are pure functions that take some input and produce an effect declaration (think re-frame from the Clojure world). You want these functions to be normal named functions so that you can just test them with the usual testing frameworks, but also register them in a lookup table for your event loop mechanism. You'd maybe want to have something like a define-handler macro:
(defvar *handlers* (make-hash-table)) ; internal to the framework

(defmacro define-handler (&whole whole name lambda-list &body body)
  `(progn (defun ,@(rest whole))
          (setf (gethash ',name *handlers*)
                (lambda ,lambda-list ,@body)))) ; could also be #',name
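Usage could then look something like this (the handler name and the event representation are made up):

(define-handler button-clicked (event)
  (list :effect :highlight :target event))

(button-clicked 42)                                 ; testable as an ordinary function
(funcall (gethash 'button-clicked *handlers*) 42)   ; and also registered for the event loop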
Control constructs (e. g. case, cond, switch, some->)
These use conditional evaluation and convenient re-arrangement of the expression.
With- style wrappers
This is an idiom for providing unwind-protect functionality around some arbitrary resource. The difference from a general with construct (as in Clojure) is that the resource type can be anything; you don't have to reify it with something like a Closeable interface.
Example:
(defmacro with-foo-bar-0 (&body body)
  (let ((foo-bar (gensym "FOO-BAR")))
    `(let (,foo-bar)
       (shiftf ,foo-bar (aref (gethash :foo *buzz*) 0) 0)
       (unwind-protect (progn ,@body)
         (setf (aref (gethash :foo *buzz*) 0) ,foo-bar)))))
This sets something inside a nested data structure to 0, and ensures that it is reset to the value it had before on any, even non-local, exit.
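For illustration, a call such as (with-foo-bar-0 (do-something)) (where do-something is a made-up body form) would expand roughly into the following, with old standing in for the generated gensym:

(let (old)
  (shiftf old (aref (gethash :foo *buzz*) 0) 0)     ; save the current value, then set it to 0
  (unwind-protect (progn (do-something))
    (setf (aref (gethash :foo *buzz*) 0) old)))     ; restore it on any exit, normal or non-local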
[This is a much-reduced version of a longer, incomplete answer which I decided was not appropriate for SE.]
There are no cases where you must use a macro. Indeed, there are no cases where you must use a programming language at all: if you are happy to learn the order code for the machine you are using and competent with a keypunch then you can program that way.
Most of us are not happy doing that: we like to use programming languages. These have two obvious benefits and one less-obvious but far more important one. The two obvious benefits:
programming languages make programming easier;
programming languages make programs portable across machines.
The more important reason is that building languages is an enormously successful approach to problem solving for human beings. It's so successful that we do it all the time, without even thinking we are doing it. Every time we invent some new term for something we are in fact inventing a language; every time a mathematician invents some new bit of notation they are inventing a language. People like to sneer at these languages by calling them 'jargon', 'slang' or 'dialect' but, famously: a shprakh iz a dialekt mit an armey un flot (translated: a language is a dialect with an army and navy).
The same thing is true for programming languages as is true for natural languages, except that programming languages are designed to communicate both with other humans and with a machine, and the machine requires very precise instructions. This means that it can be rather hard to build programming languages, so people tend to stick with the languages they know.
Except that they don't: the approach of building a language to describe some problem is so powerful that people in fact do this anyway. But they don't know that they are doing it and they don't have the tools to do it so what they end up with tends to be a hideous monster stitched together from pieces of other things with the robustness and readability of custard. We've all dealt with such things. A common characteristic is 'language in a string' where one language appears within strings of another language, with constructs of this inner language being put together by string operations in the outer language. If you are really lucky this will go several levels deep (I have seen three).
These things are abominations, but they are still the best way of dealing with large problem areas. Well, they are the best way if you live in a world where constructing a new programming language is so hard that only special clever people can do it.
But it's hard only because if your only tool is C then everything looks like a PDP-11. If instead we used a tool which made the incremental construction of programming languages easy by allowing them to be defined in terms of simpler versions of themselves in a lightweight way, then we could just construct whole families of programming languages in which to talk about various problems, each of which would simply be a point in the space of possible languages. And anyone could do this: it would be a little bit harder than just writing functions, because working out grammar rules is a little bit harder than thinking up new words, but it would not be a lot harder.
And that's what macros do: they let you define programming languages to talk about a particular problem area in a way which is extremely lightweight. One such language is Common Lisp, but it's just one starting point in the space of Lisp-family languages: a point from which you can build the language you actually want (and people, of course, will belittle these languages by calling them 'dialects': well, a programming language is only a dialect with a standards committee).
Functions let you add to the vocabulary of the language you are building. Macros let you add to the grammar of the language. Between them they let you define a new language in which to talk about the problem area you are interested in. And doing that is the whole point of programming in Lisp: Lisp is about building languages to talk about problem areas.
As soon as you are a little familiar with macros, you will wonder why you ever had this question. :-)
Macros are in no way alternatives to functions, nor vice versa. It just seems that way if you are working at the REPL, because macro expansion, compilation and running all happen within the moment you press [enter].
Macros run at compile time, so all macro processing is finished as soon as your definition runs. There is no way to "call" a macro at the runtime of the definition that involves that very macro.
Macros just compute S-expressions that are passed on to the compiler.
Just think of a macro as something that writes code for you.
This is easier to understand with a little more code in your editor than with small definitions at the REPL. Good luck!

Is Racket (and Typed Racket) strongly or softly typed?

I realize the definitions of "strongly typed" and "softly typed" are loose and open to interpretation, but I have yet to find a clear definition in relation to untyped Racket (which from my understanding means dynamically typed) and Typed Racket on this.
Again, I'm sure it's not so cut and dried, but at least I'd like to learn more about which direction it leans in. The more research I've done on this, the more confused I've gotten, so thank you in advance for the help!
One problem in answering questions like this is that people disagree about the meanings of nearly all of these terms. So... what follows is my opinion (though it is a fairly well-informed one, if I do say so myself).
All languages operate on some set of values, and have some runtime behavior. Trying to add a number to a function fails in nearly all languages. You can call this a "type system," but it's probably not the right term.
So what is a type system? These days, I claim that the term generally refers to a system that examines a program and statically[*] deduces properties of the program. Typically, if it's called a type system, this means attaching a "type" to each expression that constrains the set/class of values that the expression can evaluate to. Note that this definition basically makes the term "dynamically typed" meaningless.
Note the giant loophole: there's a "trivial type system", which simply assigns the "type" containing all program values to every expression. So, if you want to, you can consider literally any language to be statically typed. Or, if you prefer, "unityped" (note the "i" in there).
Okay, down to brass tacks.
Racket is not typed. Or, if you prefer, "dynamically typed," or "unityped," or even "untyped".
Typed Racket is typed. It has a static type system that assigns to every expression a single type. Its type system is "sound", which means that evaluation of the program will conform to the claims made by the type system: if Typed Racket (henceforth TR) type-checks your program and assigns the type 'Natural' to an expression, then it will definitely evaluate to a natural number (assuming no bugs in the TR type checker or the Racket runtime system).
Typed Racket has many unusual characteristics that allow code written in TR to interoperate with code written in Racket. The most well-known of these is "occurrence typing", which allows a TR program to deal with types like (U Number String) (that is, a value that's either a number or a string) without exploding, as earlier similar type systems did.
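As a small sketch of what occurrence typing looks like in practice (the function describe here is made up):

#lang typed/racket

(: describe (-> (U Number String) String))
(define (describe x)
  (if (number? x)
      (format "a number: ~a" x)   ; in this branch x is narrowed to Number
      (string-append x "!")))     ; in this branch x is narrowed to String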
That's kind of beside the point, though: your question was about Racket and TR, and the simple answer is that the basic Racket language does not have a static type system, and TR does.
[*] defining the term 'static' is outside the scope of this post :).
Strong and weak typing have nothing to do with static or dynamic typing. You can have any combination of them, giving 4 variations: strong/static, weak/static, strong/dynamic, and weak/dynamic.
Scheme (and thus #lang racket) is dynamically and strongly typed.
> (string-append "test" 5)
string-append: contract violation
expected: string?
given: 5
argument position: 2nd
other arguments...:
All its values have a type, and functions can demand a type. If you append a string to a number you get a type error; you need to explicitly convert the number to a string using number->string to satisfy the contract that all arguments be strings. In a weakly typed language like JavaScript, the number would just be coerced to a string to satisfy the function. Less code, but possibly more runtime bugs.
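For example, making the conversion explicit satisfies string-append's contract:

> (string-append "test" (number->string 5))
"test5"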
Since Scheme is strongly typed, #lang typed/racket is definitely strongly typed too.
While Scheme/#lang racket is dynamically typed, I'm not entirely sure whether #lang typed/racket is completely static. The Guide calls it a gradually typed language.
One of the definitions of "weakly typed" is that when there is a type mismatch between operands instead of giving an error the language will try its best to continue, by coercing the operands from one type to the other or giving a default result.
For example, in Perl a string containing digits will be coerced into a number if you use it in an arithmetic operation:
# This Perl program prints 6.
print 3 * "2a"
Under this definition, Racket would be categorized as dynamically typed (type errors occur at runtime) and strongly typed (it doesn't automatically convert values from one type to another).
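The Perl example above, translated to Racket, is simply a runtime type error; the message looks roughly like this:

> (* 3 "2a")
*: contract violation
  expected: number?
  given: "2a"
  argument position: 2nd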
Since Typed Racket doesn't change the runtime semantics of Racket (except by introducing some extra contract checking) it would be just as strongly typed as regular Racket.
By the way, the usual terms people use are weak and strong typing. Soft typing might refer to one specific kind of type system that was created during the 90s. It didn't turn out all that well, which is one of the reasons people came up with the gradual typing approach used in languages such as Typed Racket and TypeScript.
A weakly typed language allows a legal implementation to set the computer "on fire"; in contrast, a strongly typed language rules out more buggy programs.
Even though Racket is dynamically typed, it is strongly typed.

Want to distinguish between value and expression

I'd like to know how I can distinguish between 'value' and 'expression'.
In computer science, a value is an expression which cannot be evaluated any further (a normal form).[1] The members of a type are the values of that type.[1] For example, the expression 1 + 2 is not a value as it can be reduced to the expression 3. This expression cannot be reduced any further (and is a member of the type Nat) and therefore is a value.
I found the statement above at the URL below:
https://en.wikipedia.org/wiki/Value_(computer_science)
From this statement I felt like:
I think "value" look like the "atom" in chemistry based upon the
definition of Mitchell, John C.
But someone denied this:
But, even expressions can be (represented as) values. The classic case being an s-expression in Lisp-like languages. – user2864740
That exchange is in another thread: what-is-the-value-in-1st-class-value
It would have been so simple if user2864740 hadn't said anything. But he did, and now I am confused.
Could someone explain this situation to me, or the difference that might exist in Lisp-like languages?
Thank you in advance!
[1] Mitchell, John C. (1996). Foundations for Programming Languages. The MIT Press.
If you don't know Lisp, read SICP and play with some Scheme implementation.
(The classic SICP book is a must-read; it is a very good introductory book about programming, so even if you know Lisp but didn't read SICP you really should read it, and it is freely available on-line.)
I strongly recommend reading C. Queinnec's Lisp In Small Pieces book, which explains how Lisp interpreters and compilers are designed, so it covers your question in great detail.
(actually your question needs an entire book to be answered, and Queinnec's book is that book)
LISP is a homoiconic language, hence s-expressions are values (though some values are not expressions, in particular closures). But most programming languages (C, OCaml, JavaScript, C++, Java, etc.) are (sadly) not homoiconic: their AST is not a value, and expressions cannot manipulate ASTs natively.
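A minimal Common Lisp illustration of expressions being ordinary values (the variable *e* is just a name chosen for the example):

(defvar *e* '(+ 1 2))           ; the expression, held as a plain list value
(first *e*)                     ; => +   (inspect it like any other data)
(eval *e*)                      ; => 3   (or hand it to the evaluator)
(eval (list '* 2 (eval *e*)))   ; => 6   (build a new expression, then evaluate it)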
BTW, the wikipedia sentence
a value is an expression which cannot be evaluated any further
is not always correct. For example, for the C language, values and expressions are different kind of beasts.
You should also read something about formal semantics of programming languages.
Also, reading Scott's Programming Language Pragmatics will give you a broader view (thru several programming languages).
A value is a datum: the machine representation of some piece of information, such as a number or character string. The datum belongs to a type which has an associated domain: the set of all possible values of that type. The value is an element of that set.
An expression is a datum which represents syntax: usually a structured datum built as an aggregate (usually a tree structure) of other values. However, an individual non-aggregated value can also be an expression.
The purpose of an expression may be to denote the computation of a value; in that situation, ANSI Common Lisp refers to an expression as a form. Not all expressions are forms. For instance in (let ((a 42))), (a 42) is an expression denoting, in the context of let, the variable a and its initializing form 42, and ((a 42)) is an expression denoting the complete list of binding specifications under that let.
If a form is evaluated, and the result is a datum similar to that form itself, then one of the two is the case: the form might be a literal (a value which evaluates to itself if it is treated as an expression) or it might be a quine: a clever form which doesn't directly yield itself as a value (the way a literal does) but cleverly calculates an object which is structurally identical to itself.
A value is not defined as an expression which is irreducible and denotes itself; that is a literal constant. A literal constant denotes a value. Values, however, exist in all contexts, such as the run-time context in which syntax is no longer relevant. When a program is running, it can instantiate values which never exist as a piece of syntax. If we evaluate (+ 2 2), there is a 4 which never appeared in the syntax as the expression 4. Therefore we cannot say that the value 4 is an expression which is irreducible; the value exists even if no such expression does.

Free variables in Coq

Is there any function/command to get/check whether a free variable, let's say n:U, exists in a term/expression e, using Coq? Please share.
For example, I want to state this "n does not occur in the free names of e" in Coq.
Thanks,
Wilayat
Let's assume you're talking about free variables in Coq terms:
Dealing with an elementary Coq proof (using nothing exterior), what you manipulate is, outside of the proof context, a closed term, i.e. a term with only bound variables. If, in proof mode, a term e in your goal appears to have a free variable n (meaning the variable n is somewhere in your proof context), you can simply make the binding explicit (and close the goal term) using the generalize tactic.
In more advanced cases, your less-vanilla proof may involve free variables in the form of assumptions or parameters, in which case you can list them using Print Assumptions.
If on the other hand you are talking about using Coq terms to represent the notion of term in a particular language (e.g. you are formalizing such a language), you simply have to give a special treatment to the free variables of your language.
Provided you give them a specific constructor in the inductive definition of your language's terms, it should be easy to state whether some term has free variables or not. If you're unfamiliar with the (not so trivial) notion of representing substitution & the free variables of a language, you'll find pointers at increasing levels of precision from the TAPL to B.C.Pierce's SF course to the results of the POPLMark Challenge.

Really minimum lisp

What is the minimum set of primitives required such that a language is Turing complete and a lisp variant?
Seems like car, cdr, some flow control, and something for a REPL would be enough. It would be nice if there were such a list.
Assume there are only 3 types of data: integers, symbols and lists (like in PicoLisp).
The lambda calculus is Turing complete. It has one primitive: the lambda. Translating that to a Lisp syntax is pretty trivial.
There's a good discussion of this in the Lisp FAQ. It depends on your choice of primitives. McCarthy's original "LISP 1.5 Programmer's Manual" did it with five functions: CAR, CDR, CONS, EQ, and ATOM.
I believe the minimum set is what John McCarthy published in the original paper.
The Roots of Lisp.
The code.
The best way to actually know this for sure is to implement it. I spent 3 summers creating Zozotez, which is a McCarthy-ish LISP running on Brainfuck.
I tried to find out what I needed, and on a forum you'll find a thread that says you only need lambda. Thus, you could make a whole LISP in the lambda calculus if you'd like. I found it interesting, but it's hardly the way to go if you want something that eventually has side effects and works in the real world.
For a Turing complete LISP I used Paul Graham's explanation of McCarthy's paper, and all you really need is:
symbol-evaluation
special form quote
special form if (or cond)
special form lambda (similar to quote)
function eq
function atom
function cons
function car
function cdr
function dispatch (basically apply, but not actually exposed to the system, so it handles a list where the first element is a function)
That's 10. In addition to this, to have an implementation that you can test, and not just on a drawing board:
function read
function write
That's 12. In my Zozotez I implemented set and flambda (anonymous macros, like lambda) as well. I could feed it a library implementing any dynamically bound Lisp (Elisp, PicoLisp), with the exception of file I/O (because the underlying BF does not support it other than stdin/stdout).
I recommend anyone to implement a LISP-1 interpreter, in both LISP and (not LISP), to fully understand how a language is implemented. LISP has a very simple syntax, so it's a good starting point. For all other programming languages, how you implement an interpreter is very similar. E.g. in the SICP videos the wizards make an interpreter for a logical language, but the structure and how to implement it is very similar to a Lisp interpreter, even though that language is completely different from Lisp.
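To give a feel for how far that primitive set goes, here is a rough Scheme sketch of such an evaluator. The names (evaluate, apply-fn) and the association-list environment representation are my own choices for illustration, not anything from the sources above:

(define (lookup x env)                       ; environments are association lists
  (cond ((null? env) (error "unbound variable" x))
        ((eq? (caar env) x) (cdar env))
        (else (lookup x (cdr env)))))

(define (evaluate e env)
  (cond ((symbol? e) (lookup e env))                           ; symbol evaluation
        ((not (pair? e)) e)                                    ; self-evaluating atoms
        ((eq? (car e) 'quote)  (cadr e))                       ; special form quote
        ((eq? (car e) 'if)     (if (evaluate (cadr e) env)     ; special form if
                                   (evaluate (caddr e) env)
                                   (evaluate (cadddr e) env)))
        ((eq? (car e) 'lambda) (list 'closure e env))          ; special form lambda
        (else (apply-fn (evaluate (car e) env)                 ; function dispatch
                        (map (lambda (a) (evaluate a env)) (cdr e))))))

(define (apply-fn f args)
  (if (and (pair? f) (eq? (car f) 'closure))
      (let ((params (cadr (cadr f)))
            (body   (caddr (cadr f)))
            (env    (caddr f)))
        (evaluate body (append (map cons params args) env)))
      (apply f args)))                                         ; host primitive: car, cons, ...

;; The remaining primitives are supplied through the initial environment:
(define primitives
  (list (cons 'cons cons) (cons 'car car) (cons 'cdr cdr)
        (cons 'eq? eq?)   (cons 'atom (lambda (x) (not (pair? x))))))

(evaluate '((lambda (x) (cons x (quote ()))) (quote a)) primitives) ; => (a)

Everything beyond quote, if and lambda is pushed into the initial environment, which is essentially the "function dispatch" item from the list above.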