#lang racket
(module inside racket
  (provide
   (contract-out
    [dummy (->i ([x (lambda (x) (begin (displayln 0) #t))]
                 [y (x) (lambda (y) (begin (displayln 1) #t))]
                 [z (x y) (lambda (z) (begin (displayln 2) #t))])
                any)]))
  (define (dummy x y z) #t))
(require 'inside)
(dummy 1 2 3)
The output is:
0
0
1
1
2
#t
It's unclear to me why having x and y as dependencies would require the corresponding guard to fire again.
The documentation of ->i (http://docs.racket-lang.org/reference/function-contracts.html#%28form._%28%28lib.racket%2Fcontract%2Fbase..rkt%29.-~3ei%29%29) doesn't seem to mention this behavior.
Can anyone shed some light on this?
This was as confusing to me as it was to you, so I took the opportunity to ask this question on the Racket mailing list. What follows is an attempt to summarize what I found.
The ->i combinator produces a dependent contract that uses the indy blame semantics presented in the paper Correct Blame for Contracts. The key idea presented in the paper is that, with dependent contracts, there can actually be three parties that might need to be blamed for contract violations.
With normal function contracts, there are two potentially guilty parties. The first is the most obvious one, which is the caller. For example:
> (define/contract (foo x)
    (integer? . -> . string?)
    (number->string x))
> (foo "hello")
foo: contract violation
  expected: integer?
  given: "hello"
  in: the 1st argument of
      (-> integer? string?)
  contract from: (function foo)
  blaming: anonymous-module
  (assuming the contract is correct)
The second potential guilty party is the function itself; that is, the implementation might not match the contract:
> (define/contract (bar x)
    (integer? . -> . string?)
    x)
> (bar 1)
bar: broke its own contract
  promised: string?
  produced: 1
  in: the range of
      (-> integer? string?)
  contract from: (function bar)
  blaming: (function bar)
  (assuming the contract is correct)
Both of these cases are pretty obvious. However, the ->i contract introduces a third potential guilty party: the contract itself.
Since ->i contracts can execute arbitrary expressions at contract attachment time, it’s possible for them to violate themselves. Consider the following contract:
(->i ([mk-ctc (integer? . -> . contract?)]
      [val (mk-ctc) (mk-ctc "hello")])
     [result any/c])
This is a somewhat silly contract, but it’s easy to see that it’s a naughty one. It promises to only call mk-ctc with integers, but the dependent expression (mk-ctc "hello") calls it with a string! It would obviously be wrong to blame the calling function, since it has no control over the invalid contract, but it might also be wrong to blame the contracted function, since the contract could be defined in complete isolation from the function it is attached to.
For an illustration of this, consider a multi-module example:
#lang racket
(module m1 racket
  (provide ctc)
  (define ctc
    (->i ([f (integer? . -> . integer?)]
          [v (f) (λ (v) (> (f v) 0))])
         [result any/c])))
(module m2 racket
  (require (submod ".." m1))
  (provide (contract-out [foo ctc]))
  (define (foo f v)
    (f #f)))
(require 'm2)
In this example, the ctc contract is defined in the m1 submodule, but the function that uses the contract is defined in a separate submodule, m2. There are two possible blame scenarios here:
The foo function is obviously invalid, since it applies f to #f, despite the contract specifying (integer? . -> . integer?) for that argument. You can see this in practice by calling foo:
> (foo add1 0)
foo: broke its own contract
  promised: integer?
  produced: #f
  in: the 1st argument of
      the f argument of
      (->i
       ((f (-> integer? integer?))
        (v (f) (λ (v) (> (f v) 0))))
       (result any/c))
  contract from: (anonymous-module m2)
  blaming: (anonymous-module m2)
  (assuming the contract is correct)
Note the blame information near the bottom of the contract error (the "contract from" and "blaming" lines): it blames m2, which makes sense. This isn't the interesting case, since it's the second potential blame party mentioned in the non-dependent case.
However, the ctc contract is actually a little bit wrong! Note that the contract on v applies f to v, but it never checks that v is an integer. For this reason, if v is something else, f will be called in an invalid way.1 You can see this behavior by giving a non-integral value for v:
> (foo add1 "hello")
foo: broke its own contract
promised: integer?
produced: "hello"
in: the 1st argument of
the f argument of
(->i
((f (-> integer? integer?))
(v (f) (λ (v) (> (f v) 0))))
(result any/c))
contract from: (anonymous-module m1)
blaming: (anonymous-module m1)
(assuming the contract is correct)
The top of the contract error is the same (Racket provides the same “broke its own contract” message for these kinds of contract violations), but the blame information is different! It now blames m1, which is the actual source of the contract. This is the indy blame party.
This distinction is why the contracts have to be applied twice: ->i applies them once with each distinct blame party's information, first with the contract blame, then with the function blame.
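To make this concrete, here is a minimal sketch of my own (not from the original discussion): the dependent guard on y below runs once for each blame party, so its side effect prints twice on every call, just like in the question.

#lang racket
;; A dependent guard with a visible side effect: under indy
;; semantics it is applied once with the contract's blame and
;; once with the function's blame, so it prints twice per call.
(define/contract (g x y)
  (->i ([x integer?]
        [y (x) (λ (y) (displayln "checking y") (> y x))])
       any)
  (+ x y))

(g 1 2) ; prints "checking y" twice, then returns 3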
Technically, this could be avoided for flat contracts, since flat contracts will never signal a contract violation after the initial contract attachment process. However, the ->i combinator currently does not implement any such optimization, since it probably wouldn’t have a significant impact on performance, and the contract implementation is already fairly complex (though if someone wanted to implement it, it would likely be accepted).
In general, though, contracts are expected to be stateless and idempotent (flat contracts are expected to be simple predicates), so applying them more than once is considered harmless, and ->i exploits that freedom to implement its fine-grained blame information.
1. As it turns out, the ->d contract combinator doesn’t catch this issue at all, so add1 ends up raising a contract violation here. This is why ->i was created, and it’s why ->i is favored over ->d.
I would like to have a clear understanding of what an 'atom' is in Lisp.
According to LispWorks, an atom is 'any object that is not a cons.'
But this definition is not clear enough for me.
For example, in the code below:
(cadr
 (caddar (cddddr L)))
Is L an atom? On the one hand, L is not an atom, because it is a cons, because it is a list (if we are talking about the object associated with the symbol L).
On the other hand, if we are talking about L itself (not about its content, but about the symbol L), it is an atom, because it is not a cons.
I've tried calling the function atom:
(atom L) => NIL
(atom `L) => T
but still I have no clue... Please, help!
So the final question: in the code above, 'L' is an atom, or not?
P.S. I'm asking this question because of a LISP course at my university, where we have a definition of 'simple expression': an expression which is an atom, or a function call with one or two atomic parameters. Therefore I wonder whether the expression (cddddr L) is simple, which depends on whether L is an atomic parameter or not.
Your Lisp course's private definition of "simple expression" is almost certainly rooted purely in syntax. The idea of "atomic parameter" means that it's not a compound expression. It probably has nothing to do with the run-time value!
Thus, I'm guessing, these are simple expressions:
(+ 1 2)
42
"abc"
whereas these are not:
(+ 1 (* 3 4)) ;; (* 3 4) is not an atomic parameter
(+ a b c) ;; parameters atomic, but more than two
(foo) ;; not simple: fewer than one parameter, not "one or two"
In light of the last counterexample, it would probably behoove them to revise their definition.
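If it helps, here is a rough sketch of that purely syntactic check, written in Racket rather than the course's toy dialect (the names atom? and simple-expr? are my own):

;; An expression is "simple" if it is an atom, or a function call
;; whose one or two parameters are all atoms.
(define (atom? x) (not (pair? x)))

(define (simple-expr? e)
  (or (atom? e)
      (and (pair? e)
           (atom? (car e))             ; operator position
           (<= 1 (length (cdr e)) 2)   ; one or two parameters
           (andmap atom? (cdr e)))))

(simple-expr? '(cddddr L))    ; => #t, L is an atomic parameter
(simple-expr? '(+ 1 (* 3 4))) ; => #f, (* 3 4) is compound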
On the one hand, L is not an atom, because it is a cons, because it is a list (if we are talking about the object associated with the symbol L).
You are talking here about the meaning of the code being executed, its semantics. L here stands for a value, which is a list in your tests. At runtime you can inspect values and ask about their types.
On the other hand, if we are talking about L itself (not about its content, but about the symbol L), it is an atom, because it is not a cons.
Here you are looking at the types of the values that make up the syntax of your code, how it is being represented before even being evaluated (or compiled). You are manipulating a tree of symbols, one of them being L. In the source code, this is a symbol. It has no meaning by itself other than being a name.
Code is data
Lisp makes it easy to represent source code using values in the language itself, and easy to manipulate fragments of code at one point to build code that is executed later. This is often called homoiconicity, though it is somewhat of a touchy word because people don't always think the definition is precise enough to be useful. Another saying is "code is data", something that most language designers and programmers will agree to be true.
Lisp code can be built at runtime as follows (> is the prompt of the REPL, what follows is the result of evaluation):
> (list 'defun 'foo (list 'l) (list 'car 'l))
(DEFUN FOO (L) (CAR L))
The resulting form happens to be valid Common Lisp code, not just a generic list of values. If you evaluate it with (eval *), you will define a function named FOO that takes the first element of some list L.
NB. In Common Lisp the asterisk * is bound in the REPL to the last value successfully returned.
Usually you don't build code like that; instead, the Lisp reader turns a stream of characters into such a tree. For example:
> (read-from-string "(defun foo (l) (car l))")
(DEFUN FOO (L) (CAR L))
But the reader is also called implicitly in the REPL (that's the R in the acronym).
In such a tree of symbols, L is a symbol.
Evaluation model
When you call function FOO after it has been defined, you are evaluating the body of FOO in a context where L is bound to some value. And the rule for evaluating a symbol is to lookup the value it is bound to, and return that. This is the semantics of the code, which the runtime implements for you.
If you are using a simple interpreter, maybe the symbol L is present somewhere at runtime and its binding is looked up. Usually the code is not interpreted like that: it can be analyzed and transformed into an efficient form during compilation. The result of compilation probably does not manipulate symbols anymore here, it just manipulates CPU registers and memory.
In all cases, asking for the type of L at this point is, by definition of the semantics, just asking for the type of whatever value is bound to L in the context where it appears.
An atom is anything that is not a cons cell
Really, the definition of atom is no more complex than that. A value in the language that is not a cons-cell is called an atom. This encompasses numbers, strings, everything.
Sometimes you evaluate a tree of symbols that happens to be code, but then the same rule applies.
Simple expressions
P.S. I'm asking this question because of a LISP course at my university, where we have a definition of 'simple expression': an expression which is an atom, or a function call with one or two atomic parameters. Therefore I wonder whether the expression (cddddr L) is simple, which depends on whether L is an atomic parameter or not.
In that course you are writing functions that analyze code. You are given a Lisp value and must decide if it is a simple expression or not.
You are not interested in any particular interpretation of the value you are given; at no point are you going to traverse the value, see a symbol, and try to resolve it to a value: you are checking whether the syntax is a valid simple expression or not.
Note also that the definitions in your course might differ a bit from those of any particular existing flavor (this is not necessarily Common Lisp or Scheme, but a toy LISP dialect). Give priority to the definitions from your course.
Imagine we have a predicate which tells us if an object is not a number:
(not-number-p 3) -> NIL
(not-number-p "string") -> T
(let ((foo "another string"))
  (not-number-p foo)) -> T
(not-number-p '(1 2 3)) -> T
(not-number-p (first '(1 2 3))) -> NIL
We can define that as:
(defun not-number-p (object)
  (not (numberp object)))
Above is just the opposite of NUMBERP.
NUMBERP -> T if object is a number
NOT-NUMBER-P -> NIL if object is a number
Now imagine we have a predicate NOT-CONS-P, which tells us if an object is not a CONS cell.
(not-cons-p '(1 . 2)) -> NIL
(let ((c '(1 . 2)))
  (not-cons-p c)) -> NIL
(not-cons-p 3) -> T
(let ((n 4))
  (not-cons-p n)) -> T
(not-cons-p NIL) -> T
(not-cons-p 'NIL) -> T
(not-cons-p 'a-symbol) -> T
(not-cons-p #\space) -> T
The function NOT-CONS-P can be defined as:
(defun not-cons-p (object)
  (if (consp object)
      NIL
      T))
Or shorter:
(defun not-cons-p (object)
  (not (consp object)))
The function NOT-CONS-P is traditionally called ATOM in Lisp.
In Common Lisp every object which is not a cons cell is called an atom. The function ATOM is a predicate.
See the Common Lisp HyperSpec: Function Atom
Your question:
(cadr (caddar (cddddr L)))
Is 'L' an atom?
How would we know that? L is a variable. What is the value of L?
(let ((L 10))
  (atom L)) -> T
(let ((L (cons 1 2)))
  (atom L)) -> NIL
(atom L) answers this question:
-> is the value of L an atom?
(atom L) does not answer this question:
-> is L an atom? L is a variable, and in a function call the value of L is passed to the function ATOM.
If you want to ask if the symbol L is an atom, then you need to quote the symbol:
(atom 'L) -> T
(atom (quote L)) -> T
Symbols are atoms. Actually, everything is an atom, with the exception of cons cells.
I'm in the process of implementing hygienic macros in my Scheme implementation. I've just implemented syntax-rules, but I have this code:
(define odd?
  (syntax-rules ()
    ((_ x) (not (even? x)))))
What should be the difference between that and this:
(define-syntax odd?
  (syntax-rules ()
    ((_ x) (not (even? x)))))
From what I understand, syntax-rules just returns a syntax transformer, so why can't you just use define to assign it to a symbol? Why do I need to use define-syntax? What extra stuff does that expression do?
Should the first also work in Scheme, or only the second one?
Also, what is the difference between let and let-syntax, and between letrec and letrec-syntax? Should (define|let|letrec)-syntax just typecheck whether the value is a syntax transformer?
EDIT:
I have this implementation, still using lisp macros:
;; -----------------------------------------------------------------------------
(define-macro (let-syntax vars . body)
  `(let ,vars
     ,@(map (lambda (rule)
              `(typecheck "let-syntax" ,(car rule) "syntax"))
            vars)
     ,@body))
;; -----------------------------------------------------------------------------
(define-macro (letrec-syntax vars . body)
  `(letrec ,vars
     ,@(map (lambda (rule)
              `(typecheck "letrec-syntax" ,(car rule) "syntax"))
            vars)
     ,@body))
;; -----------------------------------------------------------------------------
(define-macro (define-syntax name expr)
  (let ((expr-name (gensym)))
    `(define ,name
       (let ((,expr-name ,expr))
         (typecheck "define-syntax" ,expr-name "syntax")
         ,expr-name))))
Is this code correct?
Should this code work?
(let ((let (lambda (x) x)))
  (let-syntax ((odd? (syntax-rules ()
                       ((_ x) (not (even? x))))))
    (odd? 11)))
This question seems to imply some deep confusion about macros.
Let's imagine a language where syntax-rules returns some syntax transformer function (I am not sure this has to be true in RnRS Scheme; it is true in Racket, I think), and where let and let-syntax are the same.
So let's write this function:
(define (f v)
  (let ([g v])
    (g e (i 10)
       (if (= i 0)
           i
           (e (- i 1))))))
Which we can turn into this, of course:
(define (f v n)
  (v e (i n)
     (if (<= i 0)
         i
         (e (- i 1)))))
And I will tell you in addition that there is no binding for e or i in the environment.
What is the interpreter meant to do with this definition? Could it compile it? Could it safely infer that i can't possibly make any sense since it is used as a function and then as a number? Can it safely do anything at all?
The answer is that no, it can't. Until it knows what the argument to the function is it can't do anything. And this means that each time f is called it has to make that decision again. In particular, v might be:
(syntax-rules ()
  [(_ name (var init) form ...)
   (letrec ([name (λ (var)
                    form ...)])
     (name init))])
Under which the definition of f does make some kind of sense.
And things get worse: much worse. How about this?
(define (f v1 v2 n)
  (let ([v v1])
    (v e (i n)
       ...
       (set! v (if (eq? v v1) v2 v1))
       ...)))
What this means is that a system like this wouldn't know what the code it was meant to interpret meant until the moment it was interpreting it, or even after that point, as you can see from the second function above.
So instead of this horror, Lisps do something sane: they divide the process of evaluating bits of code into phases where each phase happens, conceptually, before the next one.
Here's a sequence for some imagined Lisp (this is kind of close to what CL does, since most of my knowledge is of that, but it is not intended to represent any particular system):
there's a phase where the code is turned from some sequence of characters to some object, possibly with the assistance of user-defined code;
there's a phase where that object is rewritten into some other object by user- and system-defined code (macros) – the result of this phase is something which is expressed in terms of functions and some small number of primitive special things, traditionally called 'special forms' which are known to the processes of stage 3 and 4;
there may be a phase where the object from phase 2 is compiled, and that phase may involve another set of user-defined macros (compiler macros);
there is a phase where the resulting code is evaluated.
And for each unit of code these phases happen in order, each phase completes before the next one begins.
This means that each phase in which the user can intervene needs its own set of defining and binding forms: it needs to be possible to say that 'this thing controls what happens at phase 2' for instance.
That's what define-syntax, let-syntax &c do: they say that 'these bindings and definitions control what happens at phase 2'. You can't, for instance, use define or let to do that, because at phase 2, these operations don't yet have meaning: they gain meaning (possibly by themselves being macros which expand to some primitive thing) only at phase 3. At phase 2 they are just bits of syntax which the macro is ingesting and spitting out.
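To tie this back to the question, here is a small sketch of my own (standard Scheme/Racket) of something that only works because the transformer is installed at the rewriting phase: swap! rewrites its call sites before evaluation, which no ordinary function value bound with define could do.

(define-syntax swap!
  (syntax-rules ()
    [(_ a b) (let ([tmp a]) (set! a b) (set! b tmp))]))

(define x 1)
(define y 2)
(swap! x y) ; rewritten at expansion time into the let/set! form
(list x y)  ; => '(2 1)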
If Racket's match macro were a function I could do this:
(define my-clauses (list '[(list '+ x y) (list '+ y x)]
                         '[_ 42]))

(on-user-input
 (λ (user-input)
   (define expr (get-form-from-user-input user-input)) ; expr could be '(+ 1 2), for example.
   (apply match expr my-clauses)))
I think there are two very different ways to do this. One is to move my-clauses into the macro world, and make a macro something like this (doesn't work):
(define my-clauses (list '[(list '+ x y) (list '+ y x)]
                         '[_ 42]))

(define-syntax-rule (match-clauses expr)
  (match expr my-clauses)) ; this is not the way it's done.
; "Macros that work together" discusses this idea, right? I'll be reading that today.

(on-user-input
 (λ (user-input)
   (define expr (get-form-from-user-input user-input)) ; expr could be '(+ 1 2), for example.
   (match-clauses expr)))
The alternative, which might be better in the end because it would allow me to change my-clauses at runtime, would be to somehow perform the pattern matching at runtime. Is there any way I can use match on runtime values?
In this question Ryan Culpepper says
It's not possible to create a function where the formal parameters and body are given as run-time values (S-expressions) without using eval.
So I guess I'd have to use eval, but the naive way won't work because match is a macro
(eval `(match ,expr ,@my-clauses) (current-namespace))
I got the desired result with the following voodoo from the guide
(define my-clauses '([(list '+ x y) (list '+ y x)]
                     [_ 42]))

(define-namespace-anchor a)
(define ns (namespace-anchor->namespace a))

(eval `(match '(+ 1 2) ,@my-clauses) ns) ; '(+ 2 1)
Is the pattern matching happening at runtime now? Is it a bad idea?
To answer the first part of your question (assuming you don't necessarily need the match clauses to be supplied at runtime):
The key is to:
Define my-clauses for compile time ("for syntax").
Reference that correctly in the macro template.
So:
(begin-for-syntax
  (define my-clauses (list '[(list '+ x y) (list '+ y x)]
                           '[_ 42])))

(define-syntax (match-clauses stx)
  (syntax-case stx ()
    [(_ expr) #`(match expr #,@my-clauses)]))
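Assuming the expansion works as intended, usage would look something like this (a sketch):

> (match-clauses '(+ 1 2))
'(+ 2 1)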
The pattern matching is happening at runtime in the last example.
One way to check is to look at the expansion:
> (syntax->datum
   (expand '(eval `(match '(+ 1 2) ,@my-clauses) ns)))
'(#%app eval (#%app list* 'match ''(+ 1 2) my-clauses) ns)
Whether it is a good idea...
Using eval is rather slow, so if you call it often it might be better to find another solution. If you haven't seen it already you might want to read "On eval in dynamic languages generally and in Racket specifically." on the Racket blog.
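If the clauses change rarely but are matched often, one hedged sketch (reusing the ns and my-clauses definitions above) is to eval them a single time into a matcher procedure via match-lambda, and then call that procedure as often as you like:

;; Pay the eval cost once, not once per input.
(define matcher
  (eval `(match-lambda ,@my-clauses) ns))

(matcher '(+ 1 2)) ; => '(+ 2 1)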
Thank you both very much, your answers gave me much food for thought. What I am trying to do is still not very well defined, but I seem to be learning a lot in the process, so that's good.
The original idea was to make an equation editor that is a hybrid between paredit and a computer algebra system. You enter an initial math s-expression, e.g. (+ x (* 2 y) (^ (- y x) 2)). After that the program presents you with a list of step transformations that you would normally make by hand: substitute a variable, distribute, factor, etc. Like a CAS, but one step at a time. Performing a transformation would happen when the user presses the corresponding key combination, although one possibility is to just show a bunch of possible results, and let the user choose the new state of the expression amongst them. For the UI, charterm will do for now.
At first I thought I would have the transformations be clauses in a match expression, but now I think I'll make them functions that take and return s-expressions. The trouble with choosing compile time vs runtime is that I want the user to be able to add more transformations, and choose his own keybindings. That could mean that they write some code which I require, or they require mine, before the application is compiled, so it doesn't force me to use eval. But it may be best if I give the user a REPL so he has programmatic control of the expression and his interactions with it as well.
Anyway, today I got caught up reading about macros, evaluation contexts and phases. I'm liking Racket more and more, and I have yet to investigate making languages... I will switch to tinkering mode now and see if I get some basic form of what I'm describing to work before my head explodes with new ideas.
I'm at the early stages of designing a framework and am fooling around with typed/racket. Suppose I have the following types:
(define-type Calculate-with-one-number (-> Number Number))
(define-type Calculate-with-two-numbers (-> Number Number Number))
And I want to have a function that dispatches on type:
(: dispatcher (-> (U Calculate-with-one-number Calculate-with-two-numbers) Number))
(define (dispatcher f args)
  (cond [(Calculate-with-one-number? f)
         (do-something args)]
        [(Calculate-with-two-numbers? f)
         (do-something-else args)]
        [else 42]))
How do I create the type-predicates Calculate-with-one-number? and Calculate-with-two-numbers? in Typed/Racket? For non-function predicates I can use define-predicate. But it's not clear how to implement predicates for function types.
Since I am self-answering, I'm taking the liberty of clarifying the gist of my question in light of the discussion of arity as a solution. The difference in arity was due to my not considering its implications when specifying the question.
The Problem
In #lang typed/racket, as in many Lisps, functions, or more properly procedures, are first-class datatypes.
By default, #lang racket types procedures by arity, and any additional specificity in argument types must be done by contract. In #lang typed/racket, procedures are typed both by arity and by the types of their arguments and return values due to the language's "baked-in contracts".
Math as an example
The Typed Racket Guide provides an example using define-type to define a procedure type:
(define-type NN (-> Number Number))
This allows specifying a procedure more succinctly:
;; Takes two numbers, returns a number
(define-type 2NN (-> Number Number Number))

(: trigFunction1 2NN)
(define (trigFunction1 x s)
  (* s (cos x)))

(: quadraticFunction1 2NN)
(define (quadraticFunction1 x b)
  (let ((x1 x))
    (+ b (* x1 x1))))
The Goal
In a domain like mathematics, it would be nice to work with more abstract procedure types because knowing that a function is cyclical between upper and lower bounds (like cos) versus having only one bound (e.g. our quadratic function) versus asymptotic (e.g. a hyperbolic function) provides for clearer reasoning about the problem domain. It would be nice to have access to abstractions something like:
(define-type Cyclic2NN (-> Number Number Number))
(define-type SingleBound2NN (-> Number Number Number))

(: trigFunction1 Cyclic2NN)
(define (trigFunction1 x s)
  (* s (cos x)))

(: quadraticFunction1 SingleBound2NN)
(define (quadraticFunction1 x b)
  (let ((x1 x))
    (+ b (* x1 x1))))

(: playTone (-> Cyclic2NN))
(define (playTone waveform)
  ...)

(: rabbitsOnFarmGraph (-> SingleBound2NN))
(define (rabbitsOnFarmGraph populationSize)
  ...)
Alas, define-type does not deliver this level of granularity when it comes to procedures. Moreover, the brief false hope that we might easily wring such type differentiation for procedures manually using define-predicate is dashed by:
Evaluates to a predicate for the type t, with the type (Any -> Boolean : t). t may not contain function types, or types that may refer to mutable data such as (Vectorof Integer).
Fundamentally, types have uses beyond static checking and contracts. As first-class members of the language, we want to be able to dispatch on our finer-grained procedure types. Conceptually, what is needed are predicates along the lines of Cyclic2NN? and SingleBound2NN?. Having only arity for dispatch using case-lambda just isn't enough.
Guidance from Untyped Racket
Fortunately, Lisps are domain-specific languages for writing Lisps once we peel back the curtain to reveal the wizard, and in the end we can get what we want. The key is to come at the issue the other way and ask "How can we use the predicates typed/racket gives us for procedures?"
Structures are Racket's user-defined data types and are the basis for extending its type system. Structures are so powerful that even in the class-based object system, "classes and objects are implemented in terms of structure types."
In #lang racket, structures can be applied as procedures by giving the #:property keyword the value prop:procedure followed by a procedure for its value. The documentation provides two examples:
The first example specifies a field of the structure to be applied as a procedure. Obviously (at least once it has been pointed out), that field must hold a value that evaluates to a procedure.
> ;; #lang racket
> (struct annotated-proc (base note)
    #:property prop:procedure
    (struct-field-index base))
> (define plus1 (annotated-proc
                 (lambda (x) (+ x 1))
                 "adds 1 to its argument"))
> (procedure? plus1)
#t
> (annotated-proc? plus1)
#t
> (plus1 10)
11
> (annotated-proc-note plus1)
"adds 1 to its argument"
In the second example an anonymous procedure [lambda] is provided directly as part of the property value. The lambda takes an operand in the first position, which is resolved to the value of the structure being used as a procedure. This allows accessing any value stored in any field of the structure, including those which evaluate to procedures.
> ;; #lang racket
> (struct greeter (name)
    #:property prop:procedure
    (lambda (self other)
      (string-append
       "Hi " other
       ", I'm " (greeter-name self))))
> (define joe-greet (greeter "Joe"))
> (greeter-name joe-greet)
"Joe"
> (joe-greet "Mary")
"Hi Mary, I'm Joe"
> (joe-greet "John")
"Hi John, I'm Joe
Applying it to typed/racket
Alas, neither syntax works with struct as implemented in typed/racket. The problem, it seems, is that the static type checker as currently implemented cannot both define the structure and resolve its signature as a procedure at the same time. The right information does not appear to be available at the right phase when using typed/racket's struct special form.
To get around this, typed/racket provides define-struct/exec which roughly corresponds to the second syntactic form from #lang racket less the keyword argument and property definition:
(define-struct/exec name-spec ([f : t] ...) [e : proc-t])

  name-spec = name
            | (name parent)

Like define-struct, but defines a procedural structure. The procedure e is used as the value for prop:procedure, and must have type proc-t.
Not only does it give us strongly typed procedural forms, it's a bit more elegant than the keyword syntax found in #lang racket. Example code to resolve the question as restated here in this answer is:
#lang typed/racket

(define-type 2NN (-> Number Number Number))

(define-struct/exec Cyclic2NN
  ((f : 2NN))
  ((lambda (self x s)
     ((Cyclic2NN-f self) x s))
   : (-> Cyclic2NN Number Number Number)))

(define-struct/exec SingleBound2NN
  ((f : 2NN))
  ((lambda (self x s)
     ((SingleBound2NN-f self) x s))
   : (-> SingleBound2NN Number Number Number)))

(define trigFunction1
  (Cyclic2NN
   (lambda (x s)
     (* s (cos x)))))

(define quadraticFunction1
  (SingleBound2NN
   (lambda (x b)
     (let ((x1 x))
       (+ b (* x1 x1))))))
The defined procedures are strongly typed in the sense that:
> (SingleBound2NN? trigFunction1)
- : Boolean
#f
> (SingleBound2NN? quadraticFunction1)
- : Boolean
#t
All that remains is writing a macro to simplify specification.
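Such a macro might look roughly like the following untested sketch (the name define-2nn-type and the use of format-id are my own choices, not from the original answer):

(require (for-syntax racket/base racket/syntax))

(define-syntax (define-2nn-type stx)
  (syntax-case stx ()
    [(_ Name)
     ;; Build the accessor name (e.g. Cyclic2NN-f) from Name.
     (with-syntax ([acc (format-id #'Name "~a-f" #'Name)])
       #'(define-struct/exec Name
           ((f : 2NN))
           ((lambda (self x s) ((acc self) x s))
            : (-> Name Number Number Number))))]))

;; Usage: (define-2nn-type Cyclic2NN) should expand into the
;; define-struct/exec form written out above.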
In the general case, what you want is impossible due to how types are implemented in Racket. Racket has contracts, which are run-time wrappers that guard parts of a program from other parts. A function contract is a wrapper that treats the function as a black box - a contract of the form (-> number? number?) can wrap any function and the new wrapper function first checks that it receives one number? and then passes it to the wrapped function, then checks that the wrapped function returns a number?. This is all done dynamically, every single time the function is called. Typed Racket adds a notion of types that are statically checked, but since it can provide and require values to and from untyped modules, those values are guarded with contracts that represent their type.
In your function dispatcher, you accept a function f dynamically, at run time and then want to do something based on what kind of function you got. But functions are black boxes - contracts don't actually know anything about the functions they wrap, they just check that they behave properly. There's no way to tell if dispatcher was given a function of the form (-> number? number?) or a function of the form (-> string? string?). Since dispatcher can accept any possible function, the functions are black boxes with no information about what they accept or promise. dispatcher can only assume the function is correct with a contract and try to use it. This is also why define-type doesn't make a predicate automatically for function types - there's no way to prove a function has the type dynamically, you can only wrap it in a contract and assume it behaves.
The exception to this is arity information - all functions know how many arguments they accept. The procedure-arity function will give you this information. So while you can't dispatch on function types at run-time in general, you can dispatch on function arity. This is what case-lambda does - it makes a function that dispatches based on the number of arguments it receives:
(: dispatcher (case-> [-> Calculate-with-one-number Number Void]
                      [-> Calculate-with-two-numbers Number Number Void]))
(define dispatcher
  (case-lambda
    [([f : Calculate-with-one-number]
      [arg : Number])
     (do-something arg)]
    [([f : Calculate-with-two-numbers]
      [arg1 : Number]
      [arg2 : Number])
     (do-something-else arg1 arg2)]))
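For reference, here is a quick sketch (plain Racket, my own) of the arity information that does survive at run time:

> (procedure-arity (lambda (x) x))
1
> (procedure-arity (lambda (x y) x))
2
> (procedure-arity (case-lambda [(x) x] [(x y) x]))
'(1 2)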
When I define a function in Common Lisp like this:
(defun foo (n)
  (declare (type fixnum n))
  (+ n 42))
I expected a call like (foo "a") to fail right away, but instead it fails at the call to +. Doesn't the declare form guarantee static type checking?
Type declarations are traditionally meant to be used as guarantees to the compiler for optimization purposes. For type checking, use check-type (but note that it, too, does the checking at run-time, not at compile-time):
(defun foo (n)
  (check-type n fixnum)
  (+ n 42))
That said, different Common Lisp implementations interpret type declarations differently. SBCL, for example, will treat them as types to be checked if the safety policy setting is high enough.
In addition, if you want static checking, SBCL is probably your best bet as well, since its type inference engine warns you about any inconsistencies it encounters. To that end, ftype declarations can be put to good use:
CL-USER(1): (declaim (ftype (function (string) string) bar))
CL-USER(2): (defun foo (n)
              (declare (type fixnum n))
              (bar n))
; in: DEFUN FOO
; (BAR N)
;
; caught WARNING:
; Derived type of N is
; (VALUES FIXNUM &OPTIONAL),
; conflicting with its asserted type
; STRING.
; See also:
; The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
; caught 1 WARNING condition
FOO
Declarations are just hints to the compiler so it can produce more efficient code. In other words, they are not static checks.