Is there an option to enable overloading of user-defined symbols in CVC4 for SMT input?

In versions of the SMT-LIB language through 1.2, overloading of user-defined symbols was allowed. Since version 2.0 of the standard, overloading is restricted to theory symbols.
Nevertheless, some SMT solvers still allow overloading of user-defined symbols, and that happens to be handy for my use case: proof obligations are easily generated automatically with overloading, not so much without... I would like to add CVC4 to my portfolio of SMT solvers, but I found that it produces a parsing error on overloaded user symbols.
I am aware that this is the correct way to be compliant with the SMT-LIB standard, but I would like to know the following: is there an option in CVC4 that disables this check, so that the parser disambiguates overloaded user symbols itself?
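For illustration, here is a minimal made-up input of the kind I mean, where the user symbol f is declared at two different signatures:

; SMT-LIB 2 forbids re-declaring a user symbol, so a conforming
; parser rejects the second declaration.
(declare-fun f (Int) Int)
(declare-fun f (Bool) Bool)
(assert (= (f 0) 0))
(check-sat)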

Unfortunately, CVC4 does not have an option to support overloaded user symbols. Each user symbol must be unique.

Is Racket (and Typed Racket) strongly or softly typed?

I realize the definitions of "strongly typed" and "softly typed" are loose and open to interpretation, but I have yet to find a clear explanation of how they apply to untyped Racket (which, as I understand it, means dynamically typed) and Typed Racket.
Again, I'm sure it's not so cut and dried, but at least I'd like to learn which direction each leans in. The more research I've done on this, the more confused I've gotten, so thank you in advance for the help!
One problem in answering questions like this is that people disagree about the meanings of nearly all of these terms. So... what follows is my opinion (though it is a fairly well-informed one, if I do say so myself).
All languages operate on some set of values, and have some runtime behavior. Trying to add a number to a function fails in nearly all languages. You can call this a "type system," but it's probably not the right term.
So what is a type system? These days, I claim that the term generally refers to a system that examines a program and statically[*] deduces properties of the program. Typically, if it's called a type system, this means attaching a "type" to each expression that constrains the set/class of values that the expression can evaluate to. Note that this definition basically makes the term "dynamically typed" meaningless.
Note the giant loophole: there's a "trivial type system", which simply assigns the "type" containing all program values to every expression. So, if you want to, you can consider literally any language to be statically typed. Or, if you prefer, "unityped" (note the "i" in there).
Okay, down to brass tacks.
Racket is not typed. Or, if you prefer, "dynamically typed," or "unityped," or even "untyped".
Typed Racket is typed. It has a static type system that assigns to every expression a single type. Its type system is "sound", which means that evaluation of the program will conform to the claims made by the type system: if Typed Racket (henceforth TR) type-checks your program and assigns the type 'Natural' to an expression, then it will definitely evaluate to a natural number (assuming no bugs in the TR type checker or the Racket runtime system).
Typed Racket has many unusual characteristics that allow code written in TR to interoperate with code written in Racket. The most well-known of these is "occurrence typing", which allows a TR program to deal with types like (U Number String) (that is, a value that's either a number or a string) without exploding, as earlier similar type systems did.
That's kind of beside the point, though: your question was about Racket and TR, and the simple answer is that the basic Racket language does not have a static type system, and TR does.
[*] defining the term 'static' is outside the scope of this post :).
Strongly typed and weakly typed have nothing to do with static or dynamic typing; you can have any combination of them, giving four variations (strong/static, weak/static, strong/dynamic, weak/dynamic).
Scheme (and thus #lang racket) is dynamically and strongly typed:
> (string-append "test" 5)
string-append: contract violation
  expected: string?
  given: 5
  argument position: 2nd
  other arguments...:
All its values have a type, and functions can demand arguments of a particular type. If you pass a number to string-append you get a type error; you need to explicitly convert the number to a string using number->string to satisfy the contract that all arguments be strings. A weakly typed language, like JavaScript, would just coerce the number to a string to satisfy the function. Less code, but possibly more runtime bugs.
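For example, the explicit conversion satisfies the contract:

> (string-append "test" (number->string 5))
"test5"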
Since Scheme is strongly typed, #lang typed/racket is definitely strongly typed too.
While Scheme/#lang racket is dynamically typed, I'm not entirely sure whether #lang typed/racket is completely static. The Guide calls it a gradually typed language.
One of the definitions of "weakly typed" is that when there is a type mismatch between operands instead of giving an error the language will try its best to continue, by coercing the operands from one type to the other or giving a default result.
For example, in Perl a string containing digits will be coerced into a number if you use it in an arithmetic operation:
# This Perl program prints 6.
print 3 * "2a"
Under this definition, Racket would be categorized as dynamically typed (type errors occur at runtime) and strongly typed (it doesn't automatically convert values from one type to another).
Since Typed Racket doesn't change the runtime semantics of Racket (except by introducing some extra contract checking) it would be just as strongly typed as regular Racket.
By the way, the usual terms people use are weak and strong typing. Soft typing refers to one specific kind of type system that was created during the 90s. It didn't turn out all that well, which is one of the reasons people came up with the gradual typing systems used in languages such as Typed Racket and TypeScript.
A weakly typed language allows a legal implementation to set the computer "on fire"; in contrast, a strongly typed language rules out more buggy programs.
Even though Racket is dynamically typed, it is strongly typed.

gcc precompiler directive __attribute__ ((__cleanup__)) vs ((cleanup)) (with vs without underscores?)

I'm learning about gcc's cleanup attribute and how it calls a function to be run when a variable goes out of scope, but I don't understand why you can use the word "cleanup" with or without underscores. Where is the documentation for the version with underscores?
The gcc documentation above shows it like this:
__attribute__ ((cleanup(cleanup_function)))
However, most code samples I read, show it like this:
__attribute__ ((__cleanup__(cleanup_function)))
Ex:
http://echorand.me/site/notes/articles/c_cleanup/cleanup_attribute_c.html
http://www.nongnu.org/avr-libc/user-manual/atomic_8h_source.html
Note that the first example link states they are identical, and of course trying it in code proves this, but how did the author know this originally? Where did this come from?
Why the difference? Where is __cleanup__ defined or documented, as opposed to cleanup?
My fundamental problem lies in the fact that I don't know what I don't know, therefore I am trying to expose some of my unknown unknowns so they become known unknowns, until I can study them and make them known knowns.
My thinking is that perhaps there is some globally-applied principle to gcc preprocessor directives, where you can arbitrarily add underscores before or after any of them? -- Or perhaps only some of them? -- Or perhaps it modifies the preprocessor directive or attribute somehow and there are cases where one method, with or without the extra underscores, is preferred over the other?
You are allowed to define a macro cleanup, as it is not a name that is reserved to the compiler. You are not allowed to define one named __cleanup__. This guarantees that your code using __cleanup__ is unaffected by other code (provided that other code behaves, of course).
As https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html#Attribute-Syntax explains:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
(But note that attributes are not preprocessor directives.)
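For completeness, here is a minimal sketch of the attribute in use (free_ptr and buf are made-up names). The cleanup function receives a pointer to the variable that is going out of scope, and the two spellings of the attribute are interchangeable:

#include <stdio.h>
#include <stdlib.h>

static void free_ptr(char **p)
{
    free(*p);  /* called with &buf when buf leaves scope */
    puts("freed");
}

int main(void)
{
    /* __cleanup__ and cleanup behave identically here */
    char *buf __attribute__((__cleanup__(free_ptr))) = malloc(16);
    (void)buf;
    return 0;  /* free_ptr(&buf) runs as buf goes out of scope */
}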

Invalid character stream macros

The following preprocessor macro:
#define _VARIANT_BOOL /##/
is not actually valid C; roughly speaking, the reason is that the preprocessor is defined as working on a stream of tokens, whereas the above assumes that it works on a stream of characters.
On the other hand, unfortunately the above actually occurs in a Microsoft header file, so I have to handle it anyway. (I'm working on a preprocessor implementation.)
What other cases have people encountered in the wild (be it in legacy code, however old, as long as that code is still in use) of preprocessor macros that are not actually valid, but work anyway because they were written under compilers with a character-oriented preprocessor implementation?
(Rationale: I'm trying to get some idea in advance how many special cases I'm going to have to hack, if I write a proper clean standard-conforming token oriented implementation.)
The relevant part of the standard (§6.10.3.3 The ## operator) says:
If the result is not a valid preprocessing token, the behavior is undefined.
This means that your preprocessor can do anything it likes and still be standard conforming, including emulating the common behaviour.
I think you can still have a "token-based" implementation and support this behaviour, by specifying that when the result of the ## operator is not a valid preprocessing token, the result is the two operand tokens unchanged. You may also want to have your preprocessor emit a warning about the invalid code.
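As a purely illustrative sketch of that fallback (not taken from any real preprocessor; the toy classifier below only recognizes identifier-shaped tokens, where a real one would cover every preprocessing-token kind):

#include <ctype.h>
#include <stdio.h>

/* Toy check: does s lex as a single identifier-like
   preprocessing token? */
static int is_single_pp_token(const char *s)
{
    if (!isalpha((unsigned char)s[0]) && s[0] != '_')
        return 0;
    for (s++; *s; s++)
        if (!isalnum((unsigned char)*s) && *s != '_')
            return 0;
    return 1;
}

/* On an invalid paste, warn and emit the two operand tokens
   unchanged, as suggested above. */
static void paste(const char *lhs, const char *rhs)
{
    char buf[256];
    snprintf(buf, sizeof buf, "%s%s", lhs, rhs);
    if (is_single_pp_token(buf)) {
        printf("%s\n", buf);
    } else {
        fprintf(stderr, "warning: pasting \"%s\" and \"%s\" does not "
                        "give a valid preprocessing token\n", lhs, rhs);
        printf("%s %s\n", lhs, rhs);
    }
}

int main(void)
{
    paste("foo", "bar");  /* valid paste: prints foobar */
    paste("/", "/");      /* invalid: warns, emits both operands */
    return 0;
}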

What is the purpose of the Emacs function (eval-and-compile...)?

I can read the documentation, so I'm not asking for a cut-and-paste of that.
I'm trying to understand the motivation for this function.
When would I want to use it?
The documentation in the Emacs Lisp manual does have some example situations that seem to answer your question (as opposed to the doc string).
From looking at the Emacs source code, eval-and-compile is used to quiet the compiler, to make macros/functions available during compilation (and evaluation), or to make feature/version specific variants of macros/functions available during compilation.
One usage I found helpful to see was in ezimage.el. In there, an if statement was put inside the eval-and-compile to conditionally define macros depending on whether the package was compiled/eval'ed in Emacs or XEmacs, and additionally whether a particular feature was present. By wrapping that conditional inside the eval-and-compile you enable the appropriate macro usage during compilation. A similar situation can be found in mwheel.el.
Similarly, if you want to define a function via fset and have it available during compilation, you need to wrap the call to fset in eval-and-compile, because otherwise the symbol -> function association isn't made until the file is evaluated (compiling a call to fset merely compiles the call; it doesn't actually perform the assignment). Why would you want this assignment during compilation? To quiet the compiler. Note: this is just my rewording of what is in the elisp documentation.
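A minimal sketch (my-alias and my-caller are made-up names): wrapping the fset in eval-and-compile makes the function known at compile time, so the byte-compiler does not warn about the later call.

(eval-and-compile
  (fset 'my-alias #'identity))  ; performed at compile time too

(defun my-caller (x)
  (my-alias x))  ; no "not known to be defined" warning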
I did notice a lot of uses in Emacs code which just wrapped calls to require, which sounds redundant when you read the documentation. I'm at a loss as to how to explain those.

How does Lisp let you redefine the language itself?

I've heard that Lisp lets you redefine the language itself, and I have tried to research it, but there is no clear explanation anywhere. Does anyone have a simple example?
Lisp users refer to Lisp as the programmable programming language. It is used for symbolic computing - computing with symbols.
Macros are only one way to exploit the symbolic computing paradigm. The broader vision is that Lisp provides easy ways to describe symbolic expressions: mathematical terms, logic expressions, iteration statements, rules, constraint descriptions and more. Macros (transformations of Lisp source forms) are just one application of symbolic computing.
There are several aspects to that: if you ask about 'redefining' the language, then strictly speaking redefining would mean changing some existing language mechanism (syntax, semantics, pragmatics). But there is also extending, embedding, and removing language features.
In the Lisp tradition there have been many attempts to provide these features. A Lisp dialect and a certain implementation may offer only a subset of them.
A few ways to redefine/change/extend functionality as provided by major Common Lisp implementations:
s-expression syntax. The syntax of s-expressions is not fixed. The reader (the function READ) uses so-called read tables to specify functions that will be executed when a character is read. One can modify and create read tables. This allows you, for example, to change the syntax of lists, symbols or other data objects. One can also introduce new syntax for new or existing data types (like hash-tables). It is also possible to replace the s-expression syntax completely and use a different parsing mechanism. If the new parser returns Lisp forms, no change is needed in the interpreter or compiler. A typical example is a read macro that can read infix expressions; within such a read macro, infix expressions and precedence rules for operators are used. Read macros are different from ordinary macros: read macros work on the character level of the Lisp data syntax (see the first sketch after this list).
replacing functions. Top-level functions are bound to symbols. The user can change this binding. Most implementations have a mechanism to allow this even for many built-in functions. If you want to provide an alternative to the built-in function ROOM, you could replace its definition. Some implementations will raise an error and then offer the option to continue with the change; sometimes it is necessary to unlock a package first. This means that functions in general can be replaced with new definitions. There are limitations: one is that the compiler may inline functions in code, so to see an effect one needs to recompile the code that uses the changed function.
advising functions. Often one wants to add some behavior to functions. This is called 'advising' in the Lisp world. Many Common Lisp implementations will provide such a facility.
custom packages. Packages group symbols into name spaces. The COMMON-LISP package is the home of all symbols that are part of the ANSI Common Lisp standard. The programmer can create new packages and import existing symbols. So you could use in your programs an EXTENDED-COMMON-LISP package that provides more or different facilities. Just by adding (IN-PACKAGE "EXTENDED-COMMON-LISP") you can start to develop using your own extended version of Common Lisp. Depending on the namespace used, the Lisp dialect you use may look slightly or even radically different. In Genera on the Lisp Machine there are several Lisp dialects side by side this way: ZetaLisp, CLtL1, ANSI Common Lisp and Symbolics Common Lisp.
CLOS and dynamic objects. The Common Lisp Object System comes with change built in. The Meta-Object Protocol extends these capabilities. CLOS itself can be extended/redefined in CLOS. You want different inheritance? Write a method. You want different ways to store instances? Write a method. Slots should carry more information? Provide a class for that. CLOS itself is designed such that it is able to implement a whole 'region' of different object-oriented programming languages. Typical examples are adding things like prototypes, integration with foreign object systems (like Objective-C), adding persistence, ...
Lisp forms. The interpretation of Lisp forms can be redefined with macros. A macro can parse the source code it encloses and change it. There are various ways to control the transformation process. Complex macros use a code walker, which understands the syntax of Lisp forms and can apply transformations. Macros can be trivial, but can also get very complex like the LOOP or ITERATE macros. Other typical examples are macros for embedded SQL and embedded HTML generation. Macros can also be used to move computation to compile time. Since the compiler is itself a Lisp program, arbitrary computation can be done during compilation. For example, a Lisp macro could compute an optimized version of a formula if certain parameters are known during compilation.
Symbols. Common Lisp provides symbol macros. Symbol macros allow changing the meaning of symbols in source code. A typical example: in (with-slots (foo) bar (+ foo 17)), the symbol FOO within the WITH-SLOTS form is replaced with the call (slot-value bar 'foo).
optimizations. With so-called compiler macros one can provide more efficient versions of some functionality. The compiler will use those compiler macros. This is an effective way for the user to program optimizations.
Condition Handling - handle conditions that result from using the programming language in a certain way. Common Lisp provides an advanced way to handle errors. The condition system can also be used to redefine language features. For example one could handle undefined function errors with a self-written autoload mechanism. Instead of landing in the debugger when an undefined function is seen by Lisp, the error handler could try to autoload the function and retry the operation after loading the necessary code.
Special variables - inject variable bindings into existing code. Many Lisp dialects, like Common Lisp, provide special/dynamic variables. Their value is looked up at runtime on the stack. This allows enclosing code to add variable bindings that influence existing code without changing it. A typical example is a variable like *standard-output*: one can rebind the variable, and all output that goes through it during the dynamic scope of the new binding goes to a new destination (see the second sketch after this list). Richard Stallman considered this capability so important that he made dynamic binding the default in Emacs Lisp (even though he knew about lexical binding in Scheme and Common Lisp).
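To make two of the items above concrete, here are tiny sketches in portable Common Lisp (the bracket syntax and names are invented for illustration). First, a read macro that adds [a b c] as syntax for building lists:

;; Read [a b c ...] as (list a b c ...)
(defun read-bracket-list (stream char)
  (declare (ignore char))
  (cons 'list (read-delimited-list #\] stream t)))

(set-macro-character #\[ #'read-bracket-list)
(set-macro-character #\] (get-macro-character #\)))

;; [1 2 3] now reads as (list 1 2 3) and evaluates to (1 2 3)

Second, rebinding a special variable to redirect the output of existing code without changing it:

(let ((*standard-output* (make-string-output-stream)))
  (print "hello")  ; PRINT writes to the rebound stream
  (get-output-stream-string *standard-output*))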
Lisp has these and more facilities because it has been used to implement a lot of different languages and programming paradigms. A typical example is an embedded implementation of a logic language, say, Prolog. Lisp allows describing Prolog terms with s-expressions, and with a special compiler the Prolog terms can be compiled to Lisp code. Sometimes the usual Prolog syntax is needed; then a parser will parse the typical Prolog terms into Lisp forms, which will then be compiled. Other examples of embedded languages are rule-based languages, mathematical expressions, SQL terms, inline Lisp assembler, HTML, XML and many more.
I'm going to pipe in that Scheme is different from Common Lisp when it comes to defining new syntax. It allows you to define templates using define-syntax which get applied to your source code wherever they are used. They look just like functions, only they run at compile time and transform the AST.
Here's an example of how let can be defined in terms of lambda. The line with let is the pattern to be matched, and the line with lambda is the resulting code template.
(define-syntax let
  (syntax-rules ()
    [(let ([var expr] ...) body1 body2 ...)
     ((lambda (var ...) body1 body2 ...) expr ...)]))
Note that this is NOTHING like textual substitution. You can actually redefine lambda and the above definition for let will still work, because it is using the definition of lambda in the environment where let was defined. Basically, it's powerful like macros but clean like functions.
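For instance, with the definition above:

(let ([x 1] [y 2])
  (+ x y))  ; => 3, i.e. ((lambda (x y) (+ x y)) 1 2)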
Macros are the usual reason for saying this. The idea is that because code is just a data structure (a tree, more or less), you can write programs to generate this data structure. Everything you know about writing programs that generate and manipulate data structures, therefore, adds to your ability to code expressively.
Macros aren't quite a complete redefinition of the language, at least as far as I know (I'm actually a Schemer; I could be wrong), because there is a restriction. A macro can only take a single subtree of your code, and generate a single subtree to replace it. Therefore you can't write whole-program-transforming macros, as cool as that would be.
However, macros as they stand can still do a whole lot of stuff - definitely more than any other language will let you do. And if you're using static compilation, it wouldn't be hard at all to do a whole-program transformation, so the restriction is less of a big deal then.
A reference to Structure and Interpretation of Computer Programs, chapters 4-5, is what I was missing from the answers (link).
These chapters guide you through building a Lisp evaluator in Lisp. I like the read because not only does it show how to redefine Lisp in a new evaluator, it also lets you learn about the specification of the Lisp programming language.
This answer is specifically concerning Common Lisp (CL hereafter), although parts of the answer may be applicable to other languages in the lisp family.
Since CL uses S-expressions and (mostly) looks like a sequence of function applications, there's no obvious difference between built-ins and user code. The main difference is that the "things the language provides" are available in a specific package within the coding environment.
With a bit of care, it is not hard to code replacements and use those instead.
Now, the "normal" reader (the part that reads source code and turns it into internal notation) expects the source code to be in a rather specific format (parenthesised S-expressions) but as the reader is driven by something called "read-tables" and these can be created and modified by the developer, it is also possible to change how the source code is supposed to look.
These two things should at least provide some rationale as to why Common Lisp can be considered a re-programmable programming language. I don't have a simple example at hand, but I do have a partial implementation of a translation of Common Lisp to Swedish (created for April 1st, a few years back).
From the outside, looking in...
I always thought it was because Lisp provided, at its core, such basic, atomic logical operators that any logical process can be built (and has been built and provided as toolsets and add-ins) from the basic components.
It is not so much that it can redefine itself as that its basic definition is so malleable that it can take any form and that no form is assumed/presumed into the structure.
As a metaphor, if you only have organic compounds you do organic chemistry, if you only have metal oxides you do metallurgy but if you have only elements you can do everything but you have extra initial steps to complete....most of which others have already done for you....
I think.....
Cool example at http://www.cs.colorado.edu/~ralex/papers/PDF/X-expressions.pdf
reader macros define X-expressions to coexist with S-expressions, e.g.,
? (cx <circle cx="62" cy="135" r="20"/>)
62
plain vanilla Common Lisp at http://www.AgentSheets.com/lisp/XMLisp/XMLisp.lisp
...
(eval-when (:compile-toplevel :load-toplevel :execute)
  (when (and (not (boundp '*Non-XMLISP-Readtable*)) (get-macro-character #\<))
    (warn "~%XMLisp: The current *readtable* already contains a #/< reader function: ~A" (get-macro-character #\<))))
... of course the XML parser is not so simple but hooking it into the lisp reader is.