Misunderstanding Clojure's require - import

I have 2 files: enviro.clj and point.clj; both in the same folder.
I want to import point.clj into enviro.clj.
enviro.clj:
(ns game-of-life.enviro
  (:require [game_of_life.point :as point]))

(defrecord Enviro [cells dims])

(defn create-dead-enviro [width height]
  (Enviro.
    (replicate (* width height) :dead)
    (point/Point. width height)))
point.clj:
(ns game-of-life.point)
; A 2D point representing a coordinate, or any pair of numbers
(defrecord Point [x y])
With this set-up though, IntelliJ (with Cursive) says it can't resolve point/Point. inside create-dead-enviro. It does, however, suggest importing it. If I let it auto-fix, it changes the top of enviro.clj to:
(ns game-of-life.enviro
  (:require [game_of_life.point :as point])
  (:import (game_of_life.point Point)))
From what I've read though, import is only for Java interop to import a Java class; it's not used to "import" a Clojure namespace.
What am I missing here?
Edit
Still no. I changed enviro.clj to:
(ns game-of-life.enviro
  (:require [game_of_life.point :as point]))

(defrecord Enviro [cells dims])

(defn create-dead-enviro [width height]
  (Enviro.
    (replicate (* width height) :dead)
    (->Point width height)))
And I'm still getting a "cannot resolve" error.

This is a bit of specialness around records, and only records: you need to import the generated class if you want to use the (recordName. args) Java interop constructor form.
If you use the ->Enviro helper function, you don't need to add the extra import.
user> (defrecord Enviro [cells dims])
user.Enviro
user> (->Enviro 1 2)
#user.Enviro{:cells 1, :dims 2}
It's a bit more Clojure-ish to do it that way anyway.
Record types are a quick way to define a named type for interacting with Java libraries that expect one. They are also slightly faster for field access than maps. When using records, keep in mind that if you dissoc one of their declared fields in the course of working with them, the result silently stops being a record and reverts back to being a normal map (assoc'ing and removing extra keys is fine). In general, use records when you know you need them for Java interop, or in very tightly optimized code that you have already carefully benchmarked (I have never seen this in practice). They have some value for documentation as well.
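A quick REPL sketch of that reversion (a minimal example reusing the Enviro record defined above; the :extra key is just illustrative):
user> (record? (assoc (->Enviro 1 2) :extra 3))
true                               ; adding an extra key keeps it a record
user> (record? (dissoc (->Enviro 1 2) :cells))
false                              ; removing a declared field yields a plain map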
Here is an example of using the ->recordName function instead of the Java interop form.
user> (ns game-of-life.point)
nil
game-of-life.point> (defrecord Point [x y])
game_of_life.point.Point
game-of-life.point> (in-ns 'user)
#namespace[user]
user> (require '[game-of-life.point :as point])
nil
user> (point/->Point 1 2)
#game_of_life.point.Point{:x 1, :y 2}
Because the Java interop form generates a named class outside the usual namespace conventions, you need to import that class if you use the className. constructor or an explicit call to new to create your record object. If you use the automatically created function ->className, then you don't need to use import.
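Putting that back into the original question, a minimal sketch of enviro.clj using the generated constructor functions might look like this (the require uses hyphens, game-of-life.point, to match the ns form in point.clj; underscores belong only in the file name; repeat is swapped in for the deprecated replicate):
(ns game-of-life.enviro
  (:require [game-of-life.point :as point]))

(defrecord Enviro [cells dims])

(defn create-dead-enviro [width height]
  (->Enviro
    (repeat (* width height) :dead)   ; same result as replicate here
    (point/->Point width height)))    ; alias-qualified constructor fn, no :import needed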


Monger session store

Following a previous question where I asked how on earth sessions work in Clojure, I have been experimenting with Monger.
In the documentation, you can find the code snippet:
(ns monger.docs.examples
  (:require [monger.core :as mg]
            [monger.ring.session-store :refer [monger-store]]))

;; create a new store, typically passed to server handlers
;; with libraries like Compojure
(let [conn (mg/connect)
      db   (mg/get-db conn "monger-test")]
  (monger-store db "sessions"))
which is helpful, but I don't know how to implement the handler. Can anyone explain how this would work interacting with a handler, or being embedded in a handler itself?
EDIT:
So far I've tried:
(def app-handler
  (let [{:keys [_ db]} (mg/connect-via-uri (env :mongo-uri))]
    (-> handler
        (session/wrap-session {:store (session-store db "sessions")}))))
but get:
java.lang.ClassCastException: class java.lang.String cannot be cast to class clojure.lang.Associative (java.lang.String is in module java.base of loader 'bootstrap'; clojure.lang.Associative is in unnamed module of loader 'app')
So it obviously doesn't like the map in front, but this is the pattern I've seen everywhere else. Any ideas (and explanations) would be wonderful!
What is handler? Can you add a bit more code of what you tried?
According to the error message, somewhere you return a string where a map is expected.
Note that session-store should return an implementation of ring.middleware.session.store/SessionStore. See wrap-session.
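A minimal sketch of the wiring, assuming the monger-store constructor from the documentation snippet above (the handler body and the connection URI here are placeholders, not the asker's actual code):
(ns myapp.core
  (:require [monger.core :as mg]
            [monger.ring.session-store :refer [monger-store]]
            [ring.middleware.session :as session]))

;; A plain Ring handler: it must return a response map, not a bare string.
(defn handler [request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    (str "session so far: " (:session request))})

(def app-handler
  (let [{:keys [db]} (mg/connect-via-uri "mongodb://127.0.0.1/monger-test")]
    (session/wrap-session handler {:store (monger-store db "sessions")})))
The two details that matter are that wrap-session is given a store satisfying SessionStore (which monger-store returns) and that the wrapped handler returns a Ring response map.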

Scheme (Kawa) - How to force macro expansion inside another macro

I want to make a macro that, when used in a class definition, creates a field, its public setter, and an annotation. However, it seems the macro is not expanding, mostly because it's used inside another (class definition) macro.
Here is an example how to define a class with one field:
(define-simple-class test-class ()
  (foo :: java.util.List))
My macro (only defines field as of now):
(define-syntax autowire
  (syntax-rules ()
    ((autowire class id)
     (id :: class))))
However, if I try to use it:
(define-simple-class test-class ()
  (autowire java.util.List foo))
and query the fields of the new class via reflection, I can see that it creates a field named autowire, and foo is nowhere to be seen. It looks like an issue with the order in which the macros are expanded.
Yes, macros are expanded “from the outside in”. After expanding define-simple-class, the subform (autowire java.util.List foo) does not exist anymore.
If you want this kind of behaviour modification, you need to define your own define-not-so-simple-class macro, which might expand to a define-simple-class form.
However, please step back before making such a minor tweak to something that is standard, and ask yourself whether it is worth it. The upside might be syntax that is slightly better aligned to the way you think, but the downside is that it might be worse aligned to the way others think (who might need to understand your code). There is a maxim for maintainable and readable coding: “be conventional”.

Avoiding global state in DAO layer of a Clojure application

I'm trying to implement the ideas from http://thinkrelevance.com/blog/2013/06/04/clojure-workflow-reloaded into my codebase.
I have a dao layer, where I now need to pass in a database in order to avoid global state. One thing that is throwing me off is the phrase:
Any function which needs one of these components has to take it as a
parameter. This isn't as burdensome as it might seem: each function
gets, at most, one extra argument providing the "context" in which it
operates. That context could be the entire system object, but more
often will be some subset. With judicious use of lexical closures, the
extra arguments disappear from most code.
Where should I use closures in order to avoid passing global state for every call? One example would be to create an init function in the dao layer, something like this:
(defprotocol Persistable
  (collection-name [this]))

(def save nil)

(defn init [{:keys [db]}]
  (alter-var-root #'save
    (fn [_]
      (fn [obj]
        (mc/insert-and-return db (collection-name obj) obj WriteConcern/SAFE)))))
This way I can initiate my dao layer from the system/start function like this:
(defn start
  [{:keys [db] :as system}]
  (let [d (-> db
              (mc/connect)
              (mc/get-db "my-test"))]
    (dao/init d)
    (assoc system :db d)))
This works, but it feels a bit icky. Is there a better way? If possible I would like to avoid forcing clients of my dao layer to have to pass a database every time it uses a function.
You can use higher-order functions to represent your DAO layer - that's the crux of functional programming: using functions to represent small to large parts of your system. So you have a higher-order function which takes the DB connection as a parameter and returns another function which you can use to call various operations like save, delete, etc. on the database. Below is one such example:
(defn db-layer [db-connection]
  (let [db-operations {:save   (fn [obj] (save db-connection obj))
                       :delete (fn [obj] (delete db-connection obj))
                       :query  (fn [q] (query db-connection q))}]   ; don't shadow query with its own argument
    (fn [operation & params]
      (-> (db-operations operation) (apply params)))))
Usage of DB layer:
(let [my-db (create-database)
      db-layer-fn (db-layer my-db)]
  (db-layer-fn :save "abc")
  (db-layer-fn :delete "abc"))
This is just an example of how higher order functions can allow you to sort of create a context for another set of functions. You can take this concept even further by combining it with other Clojure features like protocols.
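For instance, a hedged sketch of the protocol variant, closing over the db handle once so callers never pass it again (MongoDao, save!, and delete! are illustrative names; mc/insert-and-return and WriteConcern/SAFE come from the question, mc/remove-by-id is taken from Monger's collection API, and collection-name is the Persistable protocol function from the question):
(ns myapp.dao
  (:require [monger.collection :as mc])
  (:import [com.mongodb WriteConcern]))

(defprotocol DaoLayer
  (save!   [this obj])
  (delete! [this obj]))

;; The record captures db at construction time, so the rest of the code
;; just calls (save! dao obj) without threading the connection through.
(defrecord MongoDao [db]
  DaoLayer
  (save!   [_ obj] (mc/insert-and-return db (collection-name obj) obj WriteConcern/SAFE))
  (delete! [_ obj] (mc/remove-by-id db (collection-name obj) (:_id obj))))

;; Constructed once in system/start and carried in the system map:
;; (assoc system :dao (->MongoDao d))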

How do I deal with required Clojurescript code from Clojurescript macros?

Let us say I have an X.clojurescript and an X.clojure namespace. Everything in X.clojurescript is ClojureScript code, everything in X.clojure is Clojure code. Unfortunately, I cannot define macros directly in ClojureScript; I have to define them in Clojure and then bring them into a ClojureScript namespace using
(ns X.clojurescript.abc
  (:require-macros [X.clojure.def :as clj]))
This is fine. However, what if the macro (defined in X.clojure) needs to reference something defined in a ClojureScript namespace (X.clojurescript)? The problem is that the Clojure compiler does not look in my ClojureScript namespace (a separate directory) when resolving other namespaces.
I have gotten around this problem by simply creating a namespace in my Clojure code that has the same namespace and needed definition as exist in ClojureScript, but this seems kind of stupid. So, for instance, if I need X.clojurescript.abc.y in my macro, I will just create an additional namespace on the Clojure side that defs a dummy y in my Clojure version of X.clojurescript.abc; kind of dumb.
How do I deal with a macro that needs to refer to something on the ClojureScript side?
The only time a macro needs a specific namespace at the time of definition is if the macro is using code from said namespace to generate the list of symbols it will return.
You can follow along with these examples in the REPL:
(defmacro foo
  [a]
  `(bar/bar ~a))
The definition of foo will compile even though bar is not a defined namespace.
(foo :a)
Calling foo will fail at this point because you have not defined the bar namespace or the function bar yet.
(ns bar)

(defn bar
  [x]
  [x x])
This defines bar in the bar namespace.
(ns user)
(foo :a)
=> [:a :a]
Notice that bar does not need to exist at the time of foo's definition. In fact the namespace does not even need to exist at the time of foo's definition.
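Applying the same idea back to the ClojureScript question, a sketch might look like this (call-y is a hypothetical macro name; X.clojure.def and X.clojurescript.abc/y are the names from the question; it assumes the consuming ClojureScript namespace also :requires X.clojurescript.abc so that y is actually loaded at runtime):
;; X/clojure/def.clj -- plain Clojure, pulled in via :require-macros
(ns X.clojure.def)

(defmacro call-y
  "Expands to a call to the ClojureScript var X.clojurescript.abc/y.
   The symbol only has to resolve when the expansion is compiled as
   ClojureScript, not when this macro is defined."
  [arg]
  `(X.clojurescript.abc/y ~arg))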

In Common Lisp, how can I override/change evaluation behaviour for a specific type of object?

In Common Lisp, I want to implement a kind of reference system like this:
Suppose that I have:
(defclass reference () ((host) (port) (file)))
and also I have:
(defun fetch-remote-value (reference) ...), which fetches and deserializes a Lisp object.
How could I intervene in the evaluation process so that whenever a reference object is evaluated, the remote value gets fetched and re-evaluated to produce the final result?
EDIT:
A more elaborate description of what I want to accomplish:
Using cl-store I serialize Lisp objects and send them to a remote file (or db, or anything) to be saved. Upon successful storage I keep the host, port and file in a reference object. I would like, whenever eval gets called on a reference object, to first retrieve the object and then call eval on the retrieved value. Since a reference can also be serialized inside other (parent) objects or aggregate types, I get free recursive remote-reference resolution by modifying eval, so I don't have to traverse and resolve the loaded object's child references myself.
EDIT:
Since objects always evaluate to themselves, my question is a bit wrongly posed. Essentially what I would like to do is:
I would like to intercept the evaluation of symbols so that, when their value is an object of type REFERENCE, instead of returning the object itself as the result of the symbol evaluation, the result of (fetch-remote-value object) is returned.
In short: you cannot do this, except by rewriting the function eval and modifying your Lisp's compiler. The rules of evaluation are fixed by the Lisp standard.
Edit: After reading the augmented question, I don't think that you can achieve full transparency for your references here. In a scenario like
(defclass foo () ((reference :accessor ref)))
(ref some-foo)
The result of the call to ref is simply a value; it will not be considered for evaluation regardless of its type.
Of course, you could define your accessors in a way, which does the resolution transparently:
(defmacro defresolver (name class slot)
  `(defmethod ,name ((inst ,class))
     (fetch-remote-reference (slot-value inst ',slot))))

(defresolver foo-reference foo reference)
Edit You can (sort of) hook into the symbol resolution mechanism of Common Lisp using symbol macros:
(defmacro let-with-resolution (bindings &body body)
  `(symbol-macrolet
       ,(mapcar #'(lambda (form) (list (car form) `(fetch-aux ,(cadr form)))) bindings)
     ,@body))
(defmethod fetch-aux ((any t)) any)
(defmethod fetch-aux ((any reference)) (fetch-remote-reference any))
However, now things become pretty arcane; and the variables are no longer variables, but magic symbols, which merely look like variables. For example, modifying the content of a variable "bound" by this macro is not possible. The best you can do with this approach is to provide a setf expansion for fetch-aux, which modifies the original place.
Although libraries for lazy evaluation and object persistence bring you part of the way, Common Lisp does not provide a portable way to implement fully transparent persistent values. Lazy or persistent values still have to be explicitly forced.
MOP can be used to implement lazy or persistent objects though, with the slot values transparently forced. It would take a change in the internals of the Common Lisp implementations to provide general transparency, so you could do e.g. (+ p 5) with p potentially holding a persistent or lazy value.
It is not possible to directly change the evaluation mechanisms. You would need to write a compiler for your code to something else. Kind of an embedded language.
On the CLOS level there are several ways to deal with it:
Two examples:
write functions that dispatch on the reference object:
(defmethod move ((object reference) position)
  (move (dereference object) position))

(defmethod move ((object automobile) position)
  ...)
This gets ugly and might be automated with a macro.
CHANGE-CLASS
CLOS objects already have an indirection because they can change their class. Even though they may change their class, they keep their identity. CHANGE-CLASS destructively modifies the instance.
So that would make it possible to pass around reference objects and, at some point, load the data, change the reference object to some other class, and set the slots accordingly. This change of class needs to be triggered somewhere in the code.
One way to have it automagically triggered might be an error handler that catches some kinds of errors involving reference object.
I would add a layer on top of your deserialize mechanism that dispatches based on the type of the incoming data.