In SubL (aka sub-lisp), what function may one use to determine if one class is a subclass of another?
I know that e.g. (genls #$Automobile) will return a list of concepts like #$RoadVehicle and #$WheeledTransportationDevice, but is there some sort of boolean function I can call that, given two classes, tells me whether one is a subclass of the other?
I've tried (genls-p #$Automobile #$RoadVehicle) in e.g. the SubL interactor and get "GENLS-P is not fboundp."
I suppose David Whitten is technically correct, i.e. you can roll your own function called genls-p. However, please be aware that there is already a function in SubL that does what you want genls-p to do (and it is likely much faster than a hand-rolled function).
That function is called "genls?".
Here are some examples:
If you put...
(genls? #$Automobile #$RoadVehicle)
...into some SubL interpreter (e.g. the SubL Interactor on the GUI), it will return...
T
...In other words, if you ask Cyc "Is automobile a subclass of road-vehicle?", it will answer T, meaning true, i.e. "yes".
Likewise, if you put something like...
(genls? #$Automobile #$BaseKB)
...into a SubL interpreter, it will return...
NIL
...In other words, if you ask it "Is automobile a subclass of the BaseKB, i.e. the most general context, which makes the weakest assumptions about the universe?", then Cyc will answer NIL, i.e. false, i.e. "no".
Note that microtheories can sometimes cause confusing results. Consider the following illustrative examples:
(genls? #$Ghost #$SupernaturalBeing) ==> NIL
However, if you ask this question within a context (microtheory) with appropriate beliefs about the world, you will get not NIL but T as the result. E.g.
(with-mt #$WorldMythologyMt (genls? #$Ghost #$SupernaturalBeing)) ==> T
...whereas in a microtheory that is less superstitious and more scientific, such as #$LinnaeanTaxonomyPhysiologyMt, you will get NIL, not T, as the result...
(with-mt #$LinnaeanTaxonomyPhysiologyMt (genls? #$Ghost #$SupernaturalBeing)) ==> NIL
...and if you ask this in that most general, weakest assumption microtheory known as BaseKB you will also get NIL....
(with-mt #$BaseKB (genls? #$Ghost #$SupernaturalBeing)) ==> NIL
...Sometimes you will want to ignore the complexities of microtheories and collapse across microtheories. I think this is one way to do that...
(with-all-mts (genls? #$Ghost #$SupernaturalBeing)) ==> T
...although be forewarned that you may get self-contradictory results. E.g. If you had...
"The earth is a flat object" in a 'flat earth beliefs microtheory'
..and...
"The earth is a round object" in a 'general scientific consensus microtheory'
...you might get Cyc to return the self-contradictory answer that the earth is both a flat and a round object. In most practical applications, you can just get away with not worrying about such contradictions, and thus with-all-mts is an okay bet.
I hope I haven't confused you.
To recap the most important point: if you want to achieve the kind of functionality you desire, this SubL expression will serve you well...
(genls? #$Automobile #$RoadVehicle)
the message that genls-p is not fboundp tells you that no function is bound to that name yet ("fboundp" = "f"-unction "bound" "p"-redicate), so you can create one yourself thusly:
(define genls-p (a b) (ret (pif (member b (genls a)) T nil)))
so you can then use the function the way you expect:
CYC(167): (genls-p #$Automobile #$RoadVehicle)
[Time: 0.0 secs]
T
CYC(168): (genls-p #$BaseKB #$RoadVehicle)
[Time: 0.0 secs]
NIL
David Whitten
whitten#netcom.com
713-870-3834
In speech and writing, I keep wanting to refer to the data inside a monad, but I don't know what to call it.
For example, in Scala, the argument to the function passed to flatMap gets bound to…er…that thing inside the monad. In:
List(1, 2, 3).flatMap(x => List(x, x))
x gets bound to that thing I don't have a word for.
Complicating things a bit, the argument passed to the Kleisli arrow doesn't necessarily get bound to all the data inside the monad. With List, Set, Stream, and lots of other monads, flatMap calls the Kleisli arrow many times, binding x to a different piece of the data inside the monad each time. Or maybe not even to "data", so long as the monad laws are followed. Whatever it is, it's wrapped inside the monad, and flatMap passes it to you without the wrapper, perhaps one piece at a time. I just want to know what to call the relevant inside-the-monad stuff that x refers to, at least in part, so I can stop with all this fumbly language.
Is there a standard or conventional term for this thing/data/value/stuff/whatever-it-is?
If not, how about "the candy"?
Trying to say "x gets bound to" is setting you up for failure. Let me explain, and guide you towards a better way of expressing yourself when talking about these sorts of things.
Suppose we have:
someList.flatMap(x => some_expression)
If we know that someList has type List[Int], then we can safely say that inside of some_expression, x is bound to a value of type Int. Notice the caveat, "inside of some_expression". This is because, given someList = List(1, 2, 3), x will take on each of the values 1, 2, and 3, in turn.
Consider a more generalized example:
someMonadicValue.flatMap(x => some_expression)
If we know nothing about someMonadicValue, then we don't know much about how some_expression is going to be invoked. It may be run once, or three times (as in the above example), or lazily, or asynchronously, or it may be scheduled to run once someMonadicValue is finished (e.g. futures), or it may never be used at all (e.g. empty list, None). The Monad interface does not include reasoning about when or how some_expression will be used. So all you can say about what x will be is confined to the context of some_expression, whenever and however some_expression happens to be evaluated.
So back to the example.
someMonadicValue.flatMap(x => some_expression)
You are trying to say "x is the ??? of someMonadicValue", and you are looking for the word that accurately replaces ???. Well, I'm here to tell you that you're doing it wrong. If you want to speak about x, then do it either:
1. Within the context of some_expression. In this case, use the phrase I gave you above: "inside of some_expression, x is bound to a value of type Foo." Or, alternatively, you can speak about x...
2. With additional knowledge about which monad you're dealing with.
In case #2, for example, for someList.flatMap(x => some_expression), you could say that "x is each element of someList." For someFuture.flatMap(x => some_expression), you could say that "x is the successful future value of someFuture, if it indeed ever completes and succeeds."
You see, that's the beauty of Monads. That ??? that you are trying to describe, is the thing which the Monad interface abstracts over. Now do you see why it's so difficult to give ??? a name? It's because it has a different name and a different meaning for each particular monad. And that's the point of having the Monad abstraction: to unify these differing concepts under the same computational interface.
Disclaimer: I'm definitely not an expert in functional programming terminology, and I expect that the following will not be an answer to your question from your point of view. To me the problem is rather this: if choosing a term requires expert knowledge, so does understanding it.
Choosing an appropriate term largely depends on:
your desired level of linguistic correctness, and
your audience, and the corresponding connotations of certain terms.
Regarding linguistic correctness, the question is whether you properly want to refer to the values/data that are bound to x, or whether you can live with a certain (incorrect) abstraction. In terms of the audience, I would mainly differentiate between an audience with a solid background in functional programming and an audience coming from other programming paradigms. In the case of the former, choosing the term is probably not crucial, since the concept itself is familiar and many terms would lead to the right association. The discussion in the comments already contains some very good suggestions for this case. However, the discussion also shows that you need a certain background in functional programming to see the rationale behind some terms.
For an audience without a background in functional programming, I would rather sacrifice linguistic correctness in favor of comprehensibility. In such a situation I often just refer to it as "underlying type", just to avoid any confusion that I would probably create by trying to refer to the "thing(s) in the monad" itself. Obviously, it is literally wrong to say "x is bound to the underlying type". However, it is more important to me that my audience understands a concept at all. Since most programmers are familiar with containers and their underlying types, I'm aiming for the (flawed) association "underlying type" => "the thing(s) that are in a container" => "the thing(s) inside a monad", which often seems to work.
TL;DR: There always is a trade-off between correctness and accessibility. And when it comes to functional programming, it is sometimes helpful to shift the bias towards the latter.
flatMap does not call the Kleisli arrow many times. And "that thing" is not "inside" the monad.
flatMap lifts a Kleisli arrow to the monad. You could see this as the construction of an arrow M[A] => M[B] between types (A, B) lifted to the monad (M[A], M[B]), given a Kleisli arrow A => M[B].
So x in x => f(x) is the value being lifted.
What data?
data Monady a = Monady
Values of monads are values of monads; their wrapped type may be entirely a fiction. Which is to say that talking about it as if it exists may cause you pain.
What you want to talk about are the continuations like Monad m => a -> m b as they are guaranteed to exist. The funny stuff occurs in how (>>=) uses those continuations.
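To make that concrete, here is a small Haskell sketch of my own (not part of the original answer): the same shape of continuation handed to (>>=) for two standard monads. How many times the continuation runs, and with what argument, is entirely up to each instance.

-- Two Kleisli arrows (continuations of shape a -> m b), one per monad.
kList :: Int -> [Int]
kList x = [x, x * 10]

kMaybe :: Int -> Maybe Int
kMaybe x = if x > 0 then Just (x + 1) else Nothing

main :: IO ()
main = do
  print ([1, 2, 3] >>= kList)   -- list bind calls kList once per element: [1,10,2,20,3,30]
  print (Just 5 >>= kMaybe)     -- Maybe bind calls kMaybe exactly once: Just 6
  print (Nothing >>= kMaybe)    -- ...or not at all: Nothing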
'Item' seems good? 'Item parameter' if you need to be more specific.
Either the source data can contain multiple items, or the operation can call the function multiple times. "Element" tends to be more specific to the first case; "value" is singular with respect to the source and not sensible for list usages; but "item" covers all cases correctly.
Disclaimer: I know more about understandable English than about FP.
Take a step back here.
A monad isn't just the functor m that wraps a value a. A monad is the stack of endofunctors (i.e., the compositions of m's), together with the join operator. That's where the famous quip -- that a monad is a monoid in the category of endofunctors, what's the problem? -- comes from.
(The whole story is that the quip means the composition of m's is another m, as witnessed by join)
A thing with the type (m a) is typically called a monad action. You can call a the result of the action.
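For instance (a sketch of my own, using IO only because it is familiar): a value of type IO Int is an action, and the Int you obtain by running it is the result of that action.

-- readLength is a monad action of type IO Int (m = IO, a = Int);
-- running it performs the effect and yields its result, an Int.
readLength :: IO Int
readLength = fmap length getLine

main :: IO ()
main = do
  n <- readLength   -- n is bound to the result of the action
  print n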
This is a problem that's been nagging at me for some time, and I wonder if anyone here can help.
I have a PLT Redex model of a language called lambdaLVar that is more or less a garden-variety untyped lambda calculus, but extended with a store containing "lattice variables", or LVars. An LVar is a variable whose value can only increase over time, where the meaning of "increase" is given by a lattice (a partially ordered set with a least-upper-bound operation) that the user of the language specifies. Therefore lambdaLVar is really a family of languages -- instantiate it with one lattice and you get one language; with a different lattice, you get another. You can take a look at the code here; the important stuff is in lambdaLVar.rkt.
In the on-paper definition of lambdaLVar, the language definition is parameterized by that user-specified lattice. For a long time, I've wanted to do the same kind of parameterization in the Redex model, but so far, I haven't been able to figure out how. Part of the trouble is that the grammar of the language depends on how the user instantiates the lattice: elements of the lattice become terminals in the grammar. I don't know how to express a grammar in Redex that is abstract over the lattice.
In the meantime, I tried to make lambdaLVar.rkt as modular as I could. The language defined in that file is specialized to a particular lattice: natural numbers with max as the least-upper-bound (lub) operation. (Or, equivalently, natural numbers ordered by <=. It's a very boring lattice.) The only parts of the code that are specific to that lattice are the line (define lub-op max) near the top, and natural appearing in the grammar. (There's a lub metafunction that is defined in terms of the user-specified lub-op function. The latter is just a Racket function, so lub has to escape out to Racket to call lub-op.)
Barring the ability to actually specify lambdaLVar in a way that is abstract over the choice of lattice, it seems like I ought to be able to write a version of lambdaLVar with the most bare-bones of lattices -- just Bot and Top elements, where Bot <= Top -- and then use define-extended-language to add more stuff. For instance, I could define a language called lambdaLVar-nats that is specialized to the naturals lattice I described:
;; Grammar for elements of a lattice of natural numbers.
(define-extended-language lambdaLVar-nats
  lambdaLVar
  (StoreVal ....  ;; Extend the original language
            natural))
;; All we have to specify is the lub operation; leq is implicitly <=
(define-metafunction/extension lub lambdaLVar-nats
  lub-nats : d d -> d
  [(lub-nats d_1 d_2) ,(max (term d_1) (term d_2))])
Then, to replace the two reduction relations slow-rr and fast-rr that I had for lambdaLVar, I could define a couple of wrappers:
(define nats-slow-rr
  (extend-reduction-relation slow-rr
                             lambdaLVar-nats))

(define nats-fast-rr
  (extend-reduction-relation fast-rr
                             lambdaLVar-nats))
My understanding from the documentation on extend-reduction-relation is that it should reinterpret the rules in slow-rr and fast-rr, but using lambdaLVar-nats. Putting all this together, I tried running the test suite that I had with one of the new, extended reduction relations:
> (program-test-suite nats-slow-rr)
The first thing I get is a contract violation complaint: small-step-base: input (((l 3)) new) at position 1 does not match its contract. The contract line of small-step-base is just #:contract (small-step-base Config Config), where Config is a grammar nonterminal that has a different meaning under lambdaLVar-nats than it did under lambdaLVar, because of the specific lattice stuff. As an experiment, I got rid of the contracts on small-step-base and small-step-slow.
I was then able to actually run my 19 test programs, but 10 of them fail. Perhaps unsurprisingly, all the ones that fail are programs that use natural-number-valued LVars in some way. (The rest are "pure" programs that don't interact with the store of LVars at all.) So, the tests that fail are exactly the ones that use the extended grammar.
So I kept following the rabbit hole, and it seems like Redex wants me to extend all of the existing judgment forms and metafunctions to be associated with lambdaLVar-nats rather than lambdaLVar. That makes sense, and it seems to work OK for judgment forms as far as I can tell, but with metafunctions I get into trouble: I want the new metafunction to overload the old one of the same name (because existing judgment forms are using it) and there doesn't seem to be a way to do that. If I have to rename the metafunctions, it defeats the purpose, because I'll have to write whole new judgment forms anyway. I suppose that what I want is a sort of late binding of metafunction calls!
My question in a nutshell: Is there any way in Redex to parameterize the definition of a language in the way I want, or to extend the definition of a language in a way that will do what I want? Will I end up just having to write Redex-generating macros?
Thanks for reading!
I asked the Racket users mailing list; the thread begins here. To summarize the resulting discussion: In Redex as it stands today, the answer is no, there is no way to parameterize a language definition in the way I want. However, it should be possible in a future version of Redex with a module system, which is in the works right now.
It also doesn't work to try to use Redex's existing extension forms (define-extended-language, extend-reduction-relation, and so on) in the way I tried to do here, because -- as I discovered -- the original metafunctions do not get transitively reinterpreted to use the extended languages. But a module system would apparently help with this, too, because it would allow you to package up metafunctions, judgment-forms, and reduction relations together and simultaneously extend them (see the discussion here).
So, for now, the answer is, indeed, to write a Redex-generating macro. Something like this works:
(define-syntax-rule (define-lambdaLVar-language name lub-op lattice-values ...)
  (begin
    ;; Entire original Redex model goes here, with `natural` replaced with
    ;; `lattice-values ...`, and instances of `...` replaced with `(... ...)`
    ))
And then you can instantiate particular lattices with, e.g.,:
(define-lambdaLVar-language lambdaLVar-nat max natural)
I hope Redex does get modules soon, but in the meantime, this seems to work well.
defrecord in clojure allows for defining simple data containers with custom fields.
e.g.
user=> (defrecord Book [author title ISBN])
user.Book
The minimal constructor that results takes only positional arguments, with no additional functionality such as defaulting of fields, field validation, etc.
user=> (Book. "J.R.R Tolkien" "The Lord of the Rings" 9780618517657)
#:user.Book{:author "J.R.R Tolkien", :title "The Lord of the Rings", :ISBN 9780618517657}
It is always possible to write functions wrapping the default constructor to get more complex construction semantics - using keyword arguments, supplying defaults and so on.
This seems like the ideal scenario for a macro to provide expanded semantics. What macros have people written and/or recommend for richer defrecord construction?
Examples of support for full and partial record constructor functions and support for eval-able print and pprint forms:
http://david-mcneil.com/post/765563763/enhanced-clojure-records
http://github.com/david-mcneil/defrecord2
David is a colleague of mine, and we are using defrecord2 extensively in our project. I think something like this should really be part of Clojure core (details might vary considerably, of course).
The things we've found to be important are:
Ability to construct a record with named (possibly partial) parameters: (new-foo {:a 1})
Ability to construct a record by copying an existing record and making modifications: (new-foo old-foo {:a 10})
Field validation - if you pass a field outside the declared record fields, throw an error. Of course, this is actually legal and potentially useful, so there are ways to make it optional. Since it would be rare in our usage, it's far more likely to be an error.
Default values - these would be very useful but we haven't implemented it. Chas Emerick has written about adding support for default values here: http://cemerick.com/2010/08/02/defrecord-slot-defaults/
Print and pprint support - we find it very useful to have records print and pprint in a form that is eval-able back to the original record. For example, this allows you to run a test, swipe the actual output, verify it, and use it as the expected output. Or to swipe output from a debug trace and get a real eval-able form.
Here is one that defines a record with default values and invariants. It creates a ctor that can take keyword args to set the values of the fields.
(defconstrainedrecord Foo [a 1 b 2]
  [(every? number? [a b])])
(new-Foo)
;=> #user.Foo{:a 1, :b 2}
(new-Foo :a 42)
; #user.Foo{:a 42, :b 2}
And like I said... invariants:
(new-Foo :a "bad")
; AssertionError
But they only make sense in the context of Trammel.
Here is one approach: http://david-mcneil.com/post/765563763/enhanced-clojure-records
I'm following a tutorial. (Real World Haskell)
And I have one beginner question about head and tail called on empty lists: in GHCi they throw an exception.
Intuitively I would say they both should return an empty list. Could you correct me? Why not? (As far as I remember, in OzML, left or right of an empty list returns nil.)
I surely have not yet covered this topic in the tutorial, but isn't it a source of bugs (if no arguments are provided)?
I mean, if one ever passes a function a list of arguments which may be optional, reading them with head may lead to a bug?
I only know the GHCi behaviour; I don't know what happens in compiled code.
Intuitively I would say they both should return an empty list. Could you correct me? Why not?
Well, head is [a] -> a. It returns the single first element, not a list.
And when there is no first element, as in an empty list? Well, what should it return? You can't create a value of type a from nothing, so all that remains is undefined: an error.
And tail? tail is basically a list without its first element, i.e. one item shorter than the original one. You can't uphold those properties when there is no first element.
When you take one apple out of a box, you can't end up with the same box (which is what would happen if tail [] == []). The behaviour has to be undefined too.
This leads to the following conclusion:
I surely have not yet covered this topic in the tutorial, but isn't it a source of bugs? I mean, if one ever passes a function a list of arguments which may be optional, reading them with head may lead to a bug?
Yes, it is a source of bugs, because it allows you to write flawed code: code that is basically trying to read a value that doesn't exist. So: don't ever use head/tail; use pattern matching instead.
sum [] = 0
sum (x:xs) = x + sum xs
The compiler can guarantee that all possible cases are covered, values are always defined, and it's much cleaner to read.
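If you do want total versions of head and tail, the usual approach is to return a Maybe, so the empty case shows up in the type. A minimal sketch (the names safeHead and safeTail are just illustrative; safeHead is essentially Data.Maybe.listToMaybe):

-- "Total" versions of head and tail: instead of crashing on [],
-- they return Nothing, forcing the caller to handle the empty case.
safeHead :: [a] -> Maybe a
safeHead []     = Nothing
safeHead (x:_)  = Just x

safeTail :: [a] -> Maybe [a]
safeTail []     = Nothing
safeTail (_:xs) = Just xs

main :: IO ()
main = do
  print (safeHead [1, 2, 3 :: Int])   -- Just 1
  print (safeHead ([] :: [Int]))      -- Nothing
  print (safeTail "abc")              -- Just "bc"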
I was checking a friend's code, and this pattern showed up quite a bit whenever he wrote functions that returned a boolean value:
def multiple_of_three(n):
if (n % 3) == 0:
return True
else:
return False
I maintain that it's simpler (and perhaps a bit faster) to write:
def multiple_of_three(n):
return (n % 3) == 0
Am I correct that the second implementation is faster? Also, is it any less readable or somehow frowned upon?
I very much doubt there is a compiler or interpreter still out there where there is a significant speed difference - most will generate exactly the same code in both situations. But your "direct return" method is, in my opinion, clearer and more maintainable.
I can't speak of the Python interpreter's exact behavior, but doing one way over another like that (in any language) simply for reasons of "faster" is misguided, and qualifies as "premature optimization". As Paul Tomblin stated in another answer, the difference in speed, if any, is quite negligible. Common practice does dictate, however, that the second form in this case is more readable. If an expression is implicitly boolean, then the if statement wrapper is frivolous.
See also http://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
The second form is the preferred form. In my experience the first form is usually the sign of an inexperienced programmer (and this does not apply solely to Python - this crops up in most languages).