I would like to ask if it is possible to create a db in Chicken Scheme; something analogous to this:
http://www.gigamonkeys.com/book/practical-a-simple-database.html
If it is, then what predicates do I have to read/search for? Should I use an egg?
I have searched the Chicken wiki but have not found what I'm looking for. Is it just impossible to implement something like the above in Scheme, or is it done in a completely different way?
It's possible, but you'll need to use another datatype.
Unlike Common Lisp (which that book focuses on), Scheme doesn't have keyword-marked plists, since it has no analogue of CL's KEYWORD package. You'll need to decide how to store your data, and that decision will affect how you construct your make- and select equivalents. For example, if you decide that alists are a good enough substitute, then getting a property from one of your records is going to look like
(cdr (assoc 'foo record))
rather than
(getf record :foo)
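To make this concrete, here is a minimal sketch of the book's make-cd/select idea built on alists in CHICKEN Scheme. All the names are illustrative (they are not from an egg), and filter comes from SRFI-1:

```scheme
;; A sketch analogous to the book's make-cd / select, using alists.
(import (srfi 1))   ; for filter, on CHICKEN 5

(define (make-cd title artist rating)
  (list (cons 'title title)
        (cons 'artist artist)
        (cons 'rating rating)))

(define *db* '())

(define (add-record! cd)
  (set! *db* (cons cd *db*)))

;; where returns a predicate; select filters the db with it.
(define (where key value)
  (lambda (cd)
    (equal? (cdr (assoc key cd)) value)))

(define (select pred)
  (filter pred *db*))

;; (add-record! (make-cd "Home" "Dixie Chicks" 9))
;; (select (where 'artist "Dixie Chicks"))
```

This mirrors the book's design: selectors are just closures over alist lookups, so the "query language" stays ordinary Scheme.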
Related
I'd like to compare the code contents of two syntax objects and ignore things like contexts. Is converting them to datum the only way to do so? Like:
(equal? (syntax->datum #'(x+1)) (syntax->datum #'(x+1)))
If you want to compare both objects without deconstructing them at all, then yes.
HOWEVER, the problem with this method is that it only compares the datum attached to two syntax objects, and won't actually compare their binding information.
The analogy I've heard (from Ryan Culpepper) is that this is kind of like taking two paintings, draining them of their color, and seeing if they are identical. While they might be similar in some ways, you will miss a lot of differences that were in the colors.
A better approach (although it does require some work) is to use syntax-e to destructure the syntax object into more primitive lists of syntax objects, and to do this until you get identifiers (basically, syntax objects whose datum is a symbol). From there, you can generally use free-identifier=? (and sometimes bound-identifier=?) to see if each pair of identifiers would bind each other, and identifier-binding to compare module-level identifiers.
The reason there isn't a single simple predicate to compare two arbitrary syntax objects is that, generally, there isn't one good definition of what makes two pieces of code equal, even if you only care about syntactic equality. For example, the functions referenced above don't track internal bindings in a syntax object, so you will still get a very strict definition of what it means to be 'equal': that is, both syntax objects have the same structure, with identifiers that are either bound to the same module or are free-identifier=?.
As such, before you use this answer, I highly recommend you take a step back and make sure this is really what you want to do. Once in a blue moon it is, but most of the time you actually are trying to solve a similar, yet simpler, problem.
Here's a concrete example of one possible way you could do the "better approach" Leif Andersen mentioned.
I have used this in multiple places for testing purposes, though if anyone wanted to use it in non-test code, they would probably want to re-visit some of the design decisions.
However, things like the equal?/recur pattern used here should be helpful no matter how you decide to define what equality means.
Some of the decisions you might want to make differently:
On identifiers, do you want to check that the scopes are exactly the same (bound-identifier=?), or would you rather assume they will be bound outside of the syntax object and check that they are bound to the same thing, even if they have different scopes (free-identifier=?)? Note that if you choose the first, then checking the results of macro expansion will sometimes return #false because of scope differences; but if you choose the second, then any identifier that is not bound outside of the syntax object is effectively compared by symbol=? equality on names alone, so it will return #true in some places where it shouldn't. I chose the first, bound-identifier=?, here because for testing, a "false positive" where the test fails is better than a "false negative" where the test succeeds in cases it shouldn't.
On source locations, do you want to check that they are equal, or do you want to ignore them? This code ignores them because it's only for testing purposes, but if you want equality only for things which have the same source location, you might want to check that using functions like build-source-location-list.
On syntax properties, do you want to check that they are equal, or do you want to ignore them? This code ignores them because it's only for testing purposes, but if you want to check that you might want to use functions like syntax-property-symbol-keys.
Finally here is the code. It may not be exactly what you want depending on how you answered the questions above. However, its structure and how it uses equal?/recur might be helpful to you.
(require rackunit)

;; Works on fully wrapped, non-wrapped, and partially
;; wrapped values, and it checks that the inputs
;; are wrapped in all the same places. It checks scopes,
;; but it does not check source location.
(define-binary-check (check-stx=? stx=? actual expected))

;; Stx Stx -> Bool
(define (stx=? a b)
  (cond
    [(and (identifier? a) (identifier? b))
     (bound-identifier=? a b)]
    [(and (syntax? a) (syntax? b))
     (and (bound-identifier=? (datum->syntax a '||) (datum->syntax b '||))
          (stx=? (syntax-e a) (syntax-e b)))]
    [else
     (equal?/recur a b stx=?)]))
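For example (a sketch of my own, not from the original code, showing both outcomes):

```racket
;; Same structure, and identifiers that can bind each other: passes.
(check-stx=? #'(lambda (x) (+ x 1))
             #'(lambda (x) (+ x 1)))

;; Same datum, different scopes: fails, because bound-identifier=?
;; distinguishes an identifier stripped of its lexical context.
;; (check-stx=? #'x (datum->syntax #f 'x))
```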
I am implementing an interpreter that codegen to another language using Racket. As a novice I'm trying to avoid macros to the extent that I can ;) Hence I came up with the following "interpreter":
(define op (open-output-bytes))

(define (interpret arg)
  (define r
    (syntax-case arg (if)
      [(if a b) #'(fprintf op "if (~a) {~a}" a b)]))
  ; other cases here
  (eval r))
This looks a bit clumsy to me. Is there a "best practice" for doing this? Am I doing a totally crazy thing here?
Short answer: yes, this is a reasonable thing to do. The way in which you do it is going to depend a lot on the specifics of your situation, though.
You're absolutely right to observe that generating programs as strings is an error-prone and fragile way to do it. Avoiding this, though, requires being able to express the target language at a higher level, in order to circumvent that language's parser.
Again, it really has a lot to do with the language that you're targeting, and how good a job you want to do. I've hacked together things like this for generating Python myself, in a situation where I knew I didn't have time to do things right.
EDIT: oh, you're doing Python too? Bleah! :)
You have a number of different choices. Your cleanest choice is to generate a representation of Python AST nodes, so you can either inject them directly or use existing serialization. You're going to ask me whether there are libraries for this, and ... I fergits. I do believe that the current Python architecture includes ... okay, yes, I went and looked, and you're in good shape. Python's "Parser" module generates ASTs, and it looks like the AST module can be constructed directly.
https://docs.python.org/3/library/ast.html#module-ast
I'm guessing your cleanest path would be to generate JSON that represents these AST modules, then write a Python stub that translates these to Python ASTs.
All of this assumes that you want to take the high road; there's a broad spectrum of in-between approaches involving simple generalizations of python syntax (e.g.: oh, it looks like this kind of statement has a colon followed by an indented block of code, etc.).
If your source language shares syntax with Racket, then use read-syntax to produce a syntax-object representing the input program. Then use recursive descent using syntax-case or syntax-parse to discern between the various constructs.
Instead of printing directly to an output port, I recommend building a tree of elements (strings, numbers, symbols etc). The last step is then to print all the elements of the tree. Representing the output using a tree is very flexible and allows you to handle sub expressions out of order. It also allows you to efficiently concatenate output from different sources.
Macros are not needed.
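As a sketch of that tree-of-elements idea (the names emit-if and write-tree are made up for illustration):

```racket
;; Build nested lists of strings/symbols/numbers instead of printing
;; directly; flatten to the output port only at the very end.
(define (emit-if test then)
  (list "if (" test ") {" then "}"))

(define (write-tree tree out)
  (cond
    [(null? tree) (void)]
    [(pair? tree)
     (write-tree (car tree) out)
     (write-tree (cdr tree) out)]
    [else (display tree out)]))

;; (write-tree (emit-if "x > 0" (emit-if "y > 0" "z = 1;"))
;;             (current-output-port))
;; prints: if (x > 0) {if (y > 0) {z = 1;}}
```

Because sub-expressions are just tree nodes, you can generate them in any order and splice them together cheaply before the final print.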
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
AI is implemented in a variety of different languages, e.g., Python, C/C++, Java, so could someone please explain to me how exactly using Lisp allows one to achieve #5 (mentioned here by Peter Norvig):
[Lisp allows for..] A macro system that let developers create a domain-specific level of
abstraction in which to build the next level. ... today, (5) is the
only remaining feature in which Lisp excels compared to other
languages.
Source: https://www.quora.com/Is-it-true-that-Lisp-is-highly-used-programming-language-in-AI
I'm basically confused by what it means to create a domain-specific level of abstraction. Could someone please provide a concrete example/application of when/how this would be useful and, just in general, what it means? I tried reading http://lambda-the-ultimate.org/node/4765 but didn't really "get the big picture." However, I felt like there is some sort of magic here, in that Lisp allows you to write the kind of code that other procedural/OOP/functional languages won't let you. Then I came across this post: https://www.quora.com/Which-programming-language-is-better-for-artificial-intelligence-C-or-C++, where the top answer states:
For generic Artificial Intelligence, I would choose neither and
program in LISP. A real AI would have a lot of self-modifying code
(you don't think a real AI would take what its programmer wrote as The
Last Word, would you?).
This got me even more intrigued, which led me to wonder:
What exactly would it mean for an AI to have "self-inspecting, self-modifying code" (source: Why is Lisp used for AI?) and, again, why/how is this useful? It sounds very cool (almost as if the AI were self-conscious about its own operations, so to speak), and it sounds like using Lisp allows one to accomplish these kinds of things where other languages wouldn't even dream of it (sorry if this comes off as naively jolly, I have absolutely no experience in Lisp, but am excited to get started). I read a little bit of: What are the uses of self modifying code? and immediately became interested in the prospect of specific AI applications and future frontiers of self-modifying code.
Anyway, I can definitely see the connection between having the ability to write self-modifying code, and having the ability to tailor the language atop of your specific research problem domain (which is what I assume Peter Norvig implies in his post), but I really am quite unsure in what any of this really means, and I would like to understand the nuts-and-bolts (or even just the essence), of these two aspects presented above ("domain-specific level of abstraction" and "self-inspecting, self-modifying code") in a clear way.
[Lisp allows for..] A macro system that let developers create a domain-specific level of abstraction in which to build the next level. ... today, (5) is the only remaining feature in which Lisp excels compared to other languages.
This might be too broad for Stack Overflow, but the concept of domain-specific abstraction is one that happens in lots of languages, it's just much easier in Common Lisp. For instance, in C, when you make a call to open(), you get back a file descriptor. It's a small integer, but if you're adhering to the domain model, you don't care that it's an integer, you care that it's a file descriptor, and that it makes sense to use it where a file descriptor is intended. It's a leaky abstraction, though, because those calls tend to signal errors by returning negative integers, so you do actually have to care about the fact that a file descriptor is an integer, so that you can reliably compare the result and figure out whether it was an error or not. In C, you can define structures, or record types, that bundle some information together. That provides a slightly higher amount of abstraction.
The idea of abstraction is that you can think about how something is supposed to be used, and what it represents, and think in terms of the domain, not in terms of the representation. All programming languages support this in some sense, but Common Lisp makes it much easier to build up language constructs that look just like the builtins of the language, and help to avoid redundant (and error-prone) boilerplate.
For instance, if you're writing a natural deduction style theorem prover, you'll need to define a bunch of inference rules and make them available to a proof system. Some of those rules will be simpler than others, and won't need to know about the current proof scope. For instance, to check whether a use of conjunction elimination (from A∧B, infer A (or B)) is legal, you just need to check the forms of the premise and the conclusion. Without abstraction, you might have to write:
(defun check-conjunction-elimination (premises conclusion context)
  (declare (ignore context))
  (and (= (length premises) 1)
       (typep (first premises) 'conjunction)
       (member conclusion (conjunction-conjuncts (first premises))
               :test 'proposition=)))

(register-inference-rule "conjunction elimination" 'check-conjunction-elimination)
With the ability to define abstractions, you can write a pattern matcher that could simplify this to:
(defun check-conjunction-elimination (premises conclusion context)
  (declare (ignore context))
  (proposition-case (premises conclusion)
    (((and A B) A) t)
    (((and A B) B) t)))

(register-inference-rule "conjunction elimination" 'check-conjunction-elimination)
(Sure, some languages have pattern matching built in (Haskell, Prolog (in a sense)), but the point is that pattern matching is a procedural process, and you can implement it in any language. However, it's a code generation process, and in most languages, you'd have to do the code generation as a separate pass during compilation. With Common Lisp, it's part of the language.)
You could abstract that pattern into:
(define-simple-inference-rule "conjunction elimination" (premises conclusion)
  ((and A B) A)
  ((and A B) B))
And you'd still be generating the original code. This kind of abstraction saves a lot of space, and it means that when someone else comes in, they don't need to know all of Common Lisp, they just need to know how to use define-simple-inference-rule. (Of course, that does add some overhead, since it's something else that they do need to know how to use.) But the idea is still there: the code corresponds to the way you talk about the domain, not the way the programming language works.
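A hedged sketch of how define-simple-inference-rule itself might be written, assuming the proposition-case and register-inference-rule forms used above exist (they are part of this hypothetical prover, not a library):

```lisp
(defmacro define-simple-inference-rule (name (premises conclusion) &body patterns)
  ;; Expands into the same defun/register pair shown earlier, with one
  ;; proposition-case clause generated per pattern.
  (let ((fn-name (gensym "RULE-")))
    `(progn
       (defun ,fn-name (,premises ,conclusion context)
         (declare (ignore context))
         (proposition-case (,premises ,conclusion)
           ,@(mapcar (lambda (pattern) `(,pattern t)) patterns)))
       (register-inference-rule ,name ',fn-name))))
```

This is the point of the example: the boilerplate (the defun, the ignored context, the trailing t in each clause) is written once, inside the macro, instead of in every rule.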
As to "self-modifying code", I think it's a term that you'll hear more often than you'll see good uses of. In the sense of macroexpansion described above, there's a kind of self-modifying code, in that the macroexpansion code knows how to "modify" or transform the code into something else, but I don't think that's a great example. Since you have access to eval, you can modify code as an object and evaluate it, but I don't think many people really advocate for that. Being able to redefine code on the fly can be handy, but again, I think you'll see people doing that much more in the REPL than in programs.
However, being able to return closures (something which more and more languages are supporting) is a big help. For instance, trace is "sort of" self modifying. You could implement it something like this:
(defmacro trace (name)
  ;; Note: shadows the standard CL trace; for illustration only.
  `(let ((f (symbol-function ',name)))
     (setf (symbol-function ',name)
           (lambda (&rest args)
             (print (list* ',name args))
             (apply f args)))))
You'd need to do something more to support untrace, but I think the point is fairly clear; you can do things that change the behavior of functions, etc., in predictable ways, at run time. trace and logging facilities are an easy example, but if a system decided to profile some of the methods that it knows are important, it could dynamically decide to start caching some of the results, or doing other interesting things. That's a kind of "self modification" that could be quite helpful.
The following PL code does not work under #lang pl:
Edited code according to Alexis Kings answer
(define-type BINTREE
  [Leaf Number]
  [Node BINTREE BINTREE])

(: retrieve-leaf : BINTREE -> Number)
(define (retrieve-leaf btree)
  (match btree
    [(Leaf number) number]))
What I'd like to achieve is as follows:
Receive a BINTREE as input
Check whether the tree is simply a leaf
Return the leaf numerical value
This might be a basic question but how would I go about solving this?
EDIT: The above seems to work if cases is used instead of match.
Why is that?
As you've discovered, match and cases are two similar but separate
things. The first is used for general Racket values, and the second is
used for things that you defined with define-type. Unfortunately,
they don't mix well in either direction, so if you have a defined type
then you need to use cases.
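Concretely, that means writing the function with cases instead (a sketch, assuming pl's cases syntax; the Node branch is my own addition, since cases expects every variant to be covered):

```racket
(: retrieve-leaf : BINTREE -> Number)
(define (retrieve-leaf btree)
  (cases btree
    [(Leaf number) number]
    [(Node left right) (retrieve-leaf left)]))  ; e.g., descend left
```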
As for the reason for that, it's kind of complicated... One thing is
that the pl language was made well before match was powerful enough
to deal with arbitrary values conveniently. It can now, but it cannot
be easily tweaked to do what cases does: the idea behind define-type
is to make programming simple by making it mandatory to use just
cases for such values --- there are no field accessors, no predicates
for the variants (just for the whole type), and certainly no mutation.
Still, it is possible to do anything you need with just cases. If you
read around, the core idea is to mimic disjoint union types in HM
languages like ML and Haskell, and with only cases pattern matching
available, many functions are easy to write since there's a single way
to deal with them.
match and Typed Racket got closer to being able to do these things,
but it's still not really powerful enough to do all of that --- which is
why cases will stay separate from match in the near future.
As a side note, this is in contrast to what I want --- I know that this
is often a point of confusion, so I'd love to have just match used
throughout. Maybe I'll break at some point and hack things so that
cases is also called match, and the contents of the branches would
be used to guess if you really need the real match or the cases
version. But that would really be a crude hack.
I think you're on the right track, but your match syntax isn't correct. It should look like this:
(: retrieve-leaf : BINTREE -> Number)
(define (retrieve-leaf btree)
  (match btree
    [(Leaf number) number]))
The match pattern clauses must be inside the match form. Additionally, number is just a binding, not a procedure, so it doesn't need to be in parens.
I'm trying to build a website with Luminus in order to learn a bit of Clojure. I've had years of imperative experience but only now getting into functional programming. Right now, I'm trying to process a signed_request object from Facebook.
According to the site I have to:
Split a string on a period (".") and get a vector of 2 strings.
Take the first of those strings, decode it w/ base64 and compare with a secret.
Take the second of those strings, decode it w/ base64 and decode again with JSON.
This is really straight-forward if I was doing it in an imperative language, but I am clueless when it comes to a functional approach. Right now I've only gotten as far as finding out how to split the string into a vector of 2 strings:
(defn parse-request [signed_request]
((clojure.string/split signed_request #"\.")
))
(defn redirect-page [signed_request]
(layout/render "redirect.html"
{:parsed_request parse-request(signed_request)}))
(defroutes home-routes
(GET "/" [] (home-page))
(POST "/redirect" [signed_request] (redirect-page signed_request)))
redirect-page is run when the server receives the POST request, and then it takes the signed_request and passes it into the parse-request function. What is the functional way to approach this?
I think the basic answer to your question is that functional programming is more about input and output (think of a mathematical function), whereas imperative programming tends to be more about side effects and mutable data. Instead of thinking, "what do I need to do?", think, "what kind of data structure is my goal, and how do I define it?" You can have side effects too (like printing), but generally you are writing pure functions that take arguments and return something.
Destructuring is an invaluable tool in Clojure, and it would come in handy here. You want to split a string into two strings using clojure.string/split, and then do something with one of the strings and something else with the other. You could use a let binding to assign names to each string, like this:
(let [[str1 str2] (clojure.string/split signed-request #"\.")]
  (do-stuff-with-str1-and-str2))
I'm not too familiar with this specific problem, but based on the 3 steps you listed, it sounds like you will be getting 2 results, one from each string. So, maybe you should focus on writing a function that returns a vector containing the 2 results, like this:
(defn process-signed-request [signed-request]
  (let [[str1 str2] (clojure.string/split signed-request #"\.")]
    [(compare-fn (decode-with-base-64 str1) secret)
     (decode-with-json (decode-with-base-64 str2))]))
Note that the above is partially pseudocode -- you will need to replace compare-fn, decode-with-base-64, decode-with-json, and secret with the actual code representing these things -- that, or you can leave it as-is and just implement the functions so that compare-fn, decode-with-base-64 and decode-with-json refer to actual functions that you write. This is a common approach in functional programming -- write a short higher-level function that defines the solution to the problem, and then go back and write the helper functions that it uses.
By the way, there are a couple of other ways you can write the (decode-with-json (decode-with-base-64 str2)) part:
((comp decode-with-json decode-with-base-64) str2)
(-> str2 decode-with-base-64 decode-with-json)
I often find the second approach, using a threading macro (-> or ->>) helpful when I know the sequence of things that I need to do with an object, and I want the code to read intuitively. I find it easier to read, like "take str2, decode it with base 64, and then decode it again with JSON."
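To make the pseudocode above concrete, here is one possible sketch of the decoding helpers. Assumptions: Java 8's java.util.Base64 via interop, URL-safe base64 as in Facebook's signed_request format, and the cheshire library for JSON parsing:

```clojure
(require '[cheshire.core :as json])  ; assumes cheshire is a project dependency

(defn decode-with-base-64 [s]
  ;; signed_request uses URL-safe base64 ("-" and "_" instead of "+" and "/")
  (String. (.decode (java.util.Base64/getUrlDecoder) s) "UTF-8"))

(defn decode-with-json [s]
  (json/parse-string s true))  ; true => keywordize map keys
```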
Also, this is just a nitpicky thing, but pay attention to the order of parentheses when you're coding in Clojure. The way you have your code now, the parentheses should look like this:
(defn parse-request [signed_request]
  (clojure.string/split signed_request #"\."))

(defn redirect-page [signed_request]
  (layout/render "redirect.html"
                 {:parsed_request (parse-request signed_request)}))
If you have a lot of imperative experience, you're probably so ingrained in the fn(x) syntax that you accidentally type that instead of the (fn x) Lisp syntax that Clojure uses.
And while I'm nitpicking, it's idiomatic in Clojure to use hyphens instead of underscores for symbol naming. So, I would rename signed_request and :parsed_request to signed-request and :parsed-request.
Hope that helps!