I have inherited a Clojure app using the following components:
Jetty server
Compojure
Ring
In order to gain an understanding of the app, I'd like to step through requests.
I'm using Emacs as my IDE.
Are there any tools or techniques I can use to accomplish this?
Sadly, Clojure doesn't have a readily available step debugger. You can connect to the JVM with jdb and step through the bytecode, but that will not be a direct reflection of your Clojure code (especially because of things like laziness, which can cause code to be evaluated from different contexts in the app than the source layout would lead you to expect).
All is not lost, though. Because idiomatic Clojure code puts a strong focus on immutable data and pure functions, it is straightforward to capture the values your functions receive at runtime, in order to investigate them and/or experiment with new values. For example:
Inside your core namespace, define your handler, launch Jetty, and start an nREPL server from the same process:
(ns my-server.core
(:require [ring.middleware
[json :refer (wrap-json-params)]
[multipart-params :refer (wrap-multipart-params)]]
...
[clojure.tools.nrepl.server :as nrepl-server]))
...
(defn init
[]
(when (= (System/getProperty "with_shell") "true")
(nrepl-server/start-server :port 7888))
  (run-jetty handler {:port 8080}))
Within a namespace containing the code that serves a particular request, you can keep track of incoming data in order to use it / investigate it in the REPL:
(ns my-ns.controllers.home)
(defonce debug (atom []))
(defn home
[request]
(let [in (java.util.Date.)
response (process request)
out (java.util.Date.)]
(swap! debug conj {:in request :out response :fn home :timing [in out]})
response))
Then, from the REPL connection, you can query the state of my-ns.controllers.home/debug by derefing the atom and seeing the various input and output values. This can be generalized to investigate the contents of various intermediate values, wherever you want to track execution. Since the data objects are more often than not immutable, you have a full record to walk through if you create an atom to store the values. Note that by creating timestamps before and after calculating the response, you can also profile the execution of the request handling function's body (which I have abstracted to a single process function for clarity here).
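For instance, an nREPL session against that atom might look something like this (a minimal sketch, assuming process returns an ordinary Ring response map):

;; from the nREPL connection on port 7888
(require 'my-ns.controllers.home)

;; how many requests have been captured so far?
(count @my-ns.controllers.home/debug)

;; inspect the most recent request/response pair and its timing
(let [{:keys [in out timing]} (last @my-ns.controllers.home/debug)]
  {:uri    (:uri in)
   :status (:status out)
   :millis (- (.getTime (second timing)) (.getTime (first timing)))})

;; clear the captured history when you're done
(reset! my-ns.controllers.home/debug [])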
There are also libraries like clojure.tools.trace if you want to use print-based tracing.
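A quick sketch of what that looks like, assuming clojure.tools.trace is on the classpath:

(require '[clojure.tools.trace :as trace])

;; trace calls to a single function...
(trace/trace-vars my-ns.controllers.home/home)

;; ...or everything in a namespace
(trace/trace-ns 'my-ns.controllers.home)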
I'm using Hunchentoot session values to make my server code re-entrant. The problem is that session values are, by definition, retained during the session, i.e., from one call from the same browser to the next, whereas what I really am looking for is what amounts to thread-specific re-entrancy, so that all the values disappear between calls -- I want to treat each click as a separate "from scratch" event, even if they are from the same session. It's easy enough to have the driver either set my session values to nil or delete them, but I'm wondering if there's a "correct" way to do this? I don't see any thread-based analog to hunchentoot:session-value in the documentation.
Thanks in advance for any guidance you can offer.
If you want a value to be "thread specific" and at the same time "from scratch" on every request, that would require every request to be dispatched in a brand-new thread. This is not the case according to the Hunchentoot documentation, which says that two models are supported: a single-threaded taskmaster and a thread-per-connection taskmaster.
If your configuration is multi-threaded, a thread-specific variable bound during request handling can therefore be expected to be per-connection. In a single-threaded Hunchentoot setup, it will effectively be global, tied to the sole request-servicing thread.
A thread-based analog to hunchentoot:session-value probably doesn't exist because it would only introduce behaviors into the web app which surprisingly change if the threading model is reconfigured, or if the request pattern from the browser changes. A browser can make multiple requests using the same connection, or close the connection between requests.
To extend the request objects with custom per-request state, I would look into, perhaps, subclassing the acceptor (how to do this is described in the docs). My custom acceptor would have a custom method on the process-connection generic function which would create extended/subclassed request objects carrying the extra stuff I wanted to put into a request.
Another way would be to have a global weak hash table which maps request objects, as keys, to additional information.
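A rough sketch of that weak-hash idea (the :weakness argument to make-hash-table is an SBCL extension, and request-extra is a name made up here for illustration):

(defvar *request-extras*
  (make-hash-table :test 'eq :weakness :key))   ; keys don't keep requests alive

(defun request-extra (request key)
  (getf (gethash request *request-extras*) key))

(defun (setf request-extra) (value request key)
  (setf (getf (gethash request *request-extras*) key) value))

;; inside a handler:
;; (setf (request-extra hunchentoot:*request* :user) (look-up-user))
;; (request-extra hunchentoot:*request* :user)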
I'm evaluating backbone.js as a potential JavaScript library for use in an application which will have a few different backends: WebSocket, REST, and a 3rd-party library producing JSON. I've read some opinions that backbone.js works beautifully with RESTful backends so long as the API is 'by the book' and follows the appropriate HTTP verbiage. Can someone elaborate on what this means?
Also, how much trouble is it to get backbone.js to connect to WebSockets? Lastly, are there any issues with integrating a backbone.js model with a function which returns JSON - in other words, does the data model always need to be served via REST?
Backbone's power is that it has an incredibly flexible and modular structure. This means you can use, extend, take out, or modify any part of Backbone. This includes the AJAX functionality.
Backbone doesn't "care" where you get the data for your collections or models. It will help you out by providing an out-of-the-box RESTful "ajax" solution, but it won't be mad if you want to use something else!
This allows you to find (or write) any plugin you want to handle the server interaction. Just look on backplug.io, Google, and Github.
Specifically for Sockets there is backbone.iobind.
Can't find a plugin? No worries. I can tell you exactly how to write one (it's 100x easier than it sounds).
The first thing that you need to understand is that overwriting behavior is SUPER easy. There are 2 main ways:
Globally:
Backbone.Collection.prototype.sync = function() {
//screw you Backbone!!! You're completely useless I am doing my own thing
}
Per instance:
var MySpecialCollection = Backbone.Collection.extend({
sync: function() {
//I like what you're doing with the ajax thing... Clever clever ;)
// But for a few collections I wanna do it my way. That cool?
  }
});
And the only other thing you need to know is what happens when you call "fetch" on a collection. This is the "by the book" / "out of the box" behavior:
collection#fetch is triggered by the user (YOU). fetch will delegate the ACTUAL fetching (ajax, sockets, local storage, or even a function that instantly returns json) to some other function (collection#sync). Whatever function is in collection.sync has to take 3 arguments:
action: one of create (for creating), read (for fetching), update (for updating), or delete (for deleting) = CRUD.
context (the this variable) - if you don't know what this is, don't worry about it; it's not important for now
options - where da magic is. We only care about 1 option though
success: a callback that gets called when the data is "ready". THIS is the callback that collection#fetch is interested in, because that's when it takes over and does its thing. The only requirement is that sync passes it the following 1st argument:
response: the actual data it got back
Now, collection#sync has to invoke that success callback from its options once it is done getting the data. Whenever collection#sync finishes its thing, collection#fetch takes back over (via that success callback) and does the following nifty steps:
Calls set or reset (for these purposes they're roughly the same).
When set finishes, it triggers a sync event on the collection broadcasting to the world "yo I'm ready!!"
So what happens in set? Well, a bunch of stuff (deduping, parsing, sorting, removing, creating models, propagating changes, and general maintenance). Don't worry about it. It works ;) What you need to worry about is how you can hook into different parts of this process. The only two you should worry about (if your server wraps data in weird ways) are:
collection#parse for parsing a collection. It should accept the raw JSON (or whatever format) that comes from the server/ajax/websocket/function/worker/whoknowswhat and turn it into an ARRAY of objects. It takes resp (the JSON) as its 1st argument and should return the mutated response. Easy peasy.
model#parse. Same as collection#parse, but it takes in the raw objects (i.e. imagine you iterate over the output of collection#parse) and spits out an "unwrapped" object.
Get off your computer and go to the beach because you finished your work in 1/100th the time you thought it would take.
That's all you need to know in order to implement whatever server system you want in place of the vanilla "ajax requests".
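To make that concrete, here is a minimal sketch of a collection whose read path skips REST entirely; fetchPeopleSomehow is a hypothetical stand-in for whatever function hands you JSON:

var PeopleCollection = Backbone.Collection.extend({
  sync: function(method, collection, options) {
    if (method === 'read') {
      // any source works here: a 3rd-party library, a cache, a local function...
      var resp = fetchPeopleSomehow();
      options.success(resp);   // hand the data back; fetch() takes it from here
      return resp;
    }
    // fall back to the stock RESTful behavior for create/update/delete
    return Backbone.sync.apply(this, arguments);
  },

  parse: function(resp) {
    // unwrap a { results: [...] }-style payload into a plain array of objects
    return resp.results || resp;
  }
});

// usage: new PeopleCollection().fetch();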
I'm looking to do some synchronous web-programming in Common Lisp, and I'm rounding up options. One of them is sw-http, an "HTTP server tailored for AJAX/Comet". The documentation seems to be a bit lacking; the only piece I could find tells you to
Sub-class SERVER and set the APPLICATION-FINDER-FN slot to a callback
that generates your content.
There don't seem to be any notes or examples about what that callback should look like (some prodding told me that it should expect a server and a connection as arguments, but nothing about what it should return or do).
Setting it to something naive like
(lambda (server conn) (declare (ignore server conn)) "Hello world")
doesn't seem to do anything, so I assume I either need to write to a stream somewhere or interact with the server/connection in some less-than-perfectly-obvious way.
Any hints?
The handler takes a connection which has a response which has some chunks.
Presumably you're to add your content to the chunks (which are octets) of the response of the connection. Luckily there are some helper methods defined to make this easier.
You might try this (I couldn't get SW-HTTP to compile, so I can't test it):
(defun hello (server connection)
(let*((response (cn-response connection))
(chunks (rs-chunks response)))
    (queue-push chunks
                (mk-response-status-code 200))
    (queue-push chunks
                (mk-response-message-body "Hello cruel world"))))
(defclass my-server (server)
((application-finder-fn :initform #'hello)))
Good luck!
I looked up the dbus package and it seems like all of the functions are built-in to the C source code and there's no documentation for them.
How do I use the dbus-call-method function?
I just had the same problem and found the emacs-fu article that comes up when googling to be a little too basic for my needs.
In particular I wanted to export my own elisp methods via dbus, and had problems making sense of the dbus terminology and how it applies to the emacs dbus interface.
First thing to check out: the Emacs documentation, C-h f dbus-register-method
dbus-register-method is a built-in function in `C source code'.
(dbus-register-method BUS SERVICE PATH INTERFACE METHOD HANDLER)
Register for method METHOD on the D-Bus BUS.
BUS is either the symbol `:system' or the symbol `:session'.
SERVICE is the D-Bus service name of the D-Bus object METHOD is
registered for. It must be a known name.
PATH is the D-Bus object path SERVICE is registered. INTERFACE is the
interface offered by SERVICE. It must provide METHOD. HANDLER is a
Lisp function to be called when a method call is received. It must
accept the input arguments of METHOD. The return value of HANDLER is
used for composing the returning D-Bus message.
BUS is just going to be :session or :system (you probably almost always want :session, as for a typical desktop application).
SERVICE is a unique name for the application on the bus, like an address or domain name. dbus.el defines dbus-service-emacs as "org.gnu.Emacs".
PATH is to different types of application functionality what SERVICE is to different applications themselves. For example, a certain Emacs module might expose functionality in the /ModuleName PATH under the org.gnu.Emacs SERVICE.
INTERFACE is just like an interface in programming. It is a specification that tells other dbus clients how to communicate with the object(s) your application exposes. It contains for example type signatures for your methods.
So you might have an interface that says something like: under the service org.gnu.Emacs, in the path /ModuleName, you will find a method named helloworld that will take zero arguments and return a string.
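Purely as an illustration of how those names line up in a call (the SERVICE and PATH here are the hypothetical ones from above, and the INTERFACE name is made up):

(dbus-call-method
 :session                     ; BUS
 "org.gnu.Emacs"              ; SERVICE
 "/ModuleName"                ; PATH
 "org.gnu.Emacs.ModuleName"   ; INTERFACE
 "helloworld")                ; METHOD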
The difficult thing to figure out for me was: how do I define an interface for my method?
Poking around dbus.el you'll find that there is dbus-interface-introspectable (among others) defined, which just contains the string "org.freedesktop.DBus.Introspectable", naming a standard interface that exposes just one method:
org.freedesktop.DBus.Introspectable.Introspect (out STRING xml_data)
(link to the spec http://dbus.freedesktop.org/doc/dbus-specification.html#standard-interfaces-introspectable)
And that is the method which is called by clients to find out about what applications expose on the dbus. So we can use that method to look at how other applications advertise their stuff on dbus, and then we can implement our own Introspect method just mimicking what the others are doing and everything will be fine.
Note, however, that the spec says applications may implement the Introspectable interface; they don't have to. In fact you can call dbus-register-method just fine with an empty string as the interface (anything will do, it seems), and you will be able to call your method. However, I always got NoReply errors and had problems with applications hanging while waiting for a response from dbus, which went away once I figured out how to make my stuff introspectable. So I assume that Introspect() is expected quite often.
So let's do this:
(defun say-world ()
;; you need to map between dbus and emacs datatypes, that's what :string is for
;; if you're returning just one value that should work automatically, otherwise
;; you're expected to put your return values in a list like I am doing here
(list :string "world"))
(dbus-register-method
:session
"org.test.emacs"
"/helloworld"
"org.test.emacs"
"hello"
'say-world)
That is what we want to implement and therefore want to define an interface for (named "org.test.emacs"). You can use it just like that and try to call the hello method with qdbus org.test.emacs /helloworld org.test.emacs.hello. It should work; for me it only works after 20 seconds of waiting (hanging the calling application), but it works.
Now let's make it introspectable:
(defun dbus-test-slash-introspect ()
"<node name='/'>
<interface name='org.freedesktop.DBus.Introspectable'>
<method name='Introspect'>
<arg name='xml_data' type='s' direction='out'/>
</method>
</interface>
<node name='helloworld'>
</node>
</node>")
(dbus-register-method
:session
"org.test.emacs"
"/"
dbus-interface-introspectable
"Introspect"
'dbus-test-slash-introspect)
(defun dbus-test-slash-helloworld-introspect ()
"<node name='/helloworld'>
<interface name='org.freedesktop.DBus.Introspectable'>
<method name='Introspect'>
<arg name='xml_data' type='s' direction='out'/>
</method>
</interface>
<interface name='org.test.emacs'>
<method name='hello'>
<arg name='' direction='out' type='s' />
</method>
</interface>
</node>")
(dbus-register-method
:session
"org.test.emacs"
"/helloworld"
dbus-interface-introspectable
"Introspect"
'dbus-test-slash-helloworld-introspect)
There we go. We just define two Introspect methods (one for each level of our path hierarchy) and return some hand-written XML telling other applications about the /helloworld path and the hello method within it. Note that dbus-test-slash-helloworld-introspect contains <interface name="org.test.emacs">...</interface> with a type signature for our method; that is, as far as I am concerned, the definition of the interface we used when we registered our method with dbus.
Evaluate all that and poke around with qdbus:
~> qdbus org.test.emacs
/
/helloworld
~> qdbus org.test.emacs /
method QString org.freedesktop.DBus.Introspectable.Introspect()
~> qdbus org.test.emacs /helloworld
method QString org.freedesktop.DBus.Introspectable.Introspect()
method QString org.test.emacs.hello()
~> qdbus org.test.emacs /helloworld org.test.emacs.hello
world
Hooray, works as expected, no hanging or NoReply errors.
One last thing, you might try to test your method like so:
(dbus-call-method :session "org.test.emacs" "/helloworld" "org.test.emacs" "hello" :timeout 1000)
and find that it just times out and wonder why. That's because if you register and call a method from within the same Emacs instance, then Emacs will wait for itself to answer. There is no fancy threading going on; you will always get a NoReply answer in that situation.
If you have to call and register a method within the same Emacs instance, you can use dbus-call-method-asynchronously, like so:
(defun handle-hello (hello)
(print hello))
(dbus-call-method-asynchronously :session "org.test.emacs" "/helloworld" "org.test.emacs" "hello" 'handle-hello)
Google to the rescue... Follow the link for the example, it's not my code so I won't put it here.
http://emacs-fu.blogspot.com/2009/01/using-d-bus-example.html
Here is a safe way to test for dbus capabilities:
(defun dbus-capable ()
"Check if dbus is available"
(unwind-protect
(let (retval)
(condition-case ex
(setq retval (dbus-ping :session "org.freedesktop.Notifications"))
('error
(message (format "Error: %s - No dbus" ex))))
retval)))
And here is a way to send a dbus notification:
(defun mbug-desktop-notification (summary body timeout icon)
  "Call the notification daemon's Notify method over dbus."
(if (dbus-capable)
(dbus-call-method
:session ; Session (not system) bus
"org.freedesktop.Notifications" ; Service name
"/org/freedesktop/Notifications" ; Service path
"org.freedesktop.Notifications" "Notify" ; Method
"emacs"
0
icon
summary
body
'(:array)
'(:array :signature "{sv}")
':int32 timeout)
(message "Oh well, you're still notified")))
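For example, a call like the following (purely illustrative values; the timeout is in milliseconds and the icon can be an empty string) should pop up a notification for five seconds:

(mbug-desktop-notification "Mail" "You have 3 new messages" 5000 "")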
Or, just evaluate the following within Emacs:
(info "(dbus)")
My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (which are used to gather custom data for a monitoring tool) with a "container" to handle boilerplate tasks like:
fetching a script's configuration from a file (and keeping that info up to date if the file changes and handle decryption of sensitive config data)
running multiple instances of the same script in different threads instead of spinning up a new process for each one
exposing an API for caching expensive data and storing persistent state from one script invocation to the next
Today, script authors must handle the issues above, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers.
Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about.
A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks.
HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):
def run (config, context, cache) :
results = http_library_call (config.url, config.http_method, config.username, config.password, ...)
    return { "html" : results.html, "status_code" : results.status, "headers" : results.response_headers }
def init(config, context, cache) :
config.max_threads = 20 # up to 20 URLs at one time (per process)
config.max_processes = 3 # launch up to 3 concurrent processes
config.keepalive = 1200 # keep process alive for 10 mins without another call
config.process_recycle.requests = 1000 # restart the process every 1000 requests (to avoid leaks)
config.kill_timeout = 600 # kill the process if any call lasts longer than 10 minutes
A database-data-fetching script example might look like this (in pseudocode):
def run (config, context, cache) :
    expensive = cache["something_expensive"]
for record in db_library_call (expensive, context.checkpoint, config.connection_string) :
context.log (record, "logDate") # log all properties, optionally specify name of timestamp property
last_date = record["logDate"]
context.checkpoint = last_date # persistent checkpoint, used next time through
def init(config, context, cache) :
cache["something_expensive"] = get_expensive_thing()
def shutdown(config, context, cache) :
expensive = cache["something_expensive"]
expensive.release_me()
Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs so I suspect I'm missing useful Python idioms.)
Specific questions:
is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this?
when a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most scripters; see the sketch after these questions)
My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? If not, what other mechanism would be more natural?
I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead?
Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
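To make the yield question concrete, here is a minimal sketch contrasting the two streaming styles; every name here (fetch_records, run_with_callback, run_with_yield, container_invoke) is hypothetical, and only run/context.log come from the proposed API:

def fetch_records(config):
    # stand-in for the real data source
    return [{"logDate": "2013-01-01", "value": 1},
            {"logDate": "2013-01-02", "value": 2}]

def run_with_callback(config, context, cache):
    # style 1: push each record to the container via context.log()
    for record in fetch_records(config):
        context.log(record, "logDate")

def run_with_yield(config, context, cache):
    # style 2: a generator; the container pulls records as it iterates
    for record in fetch_records(config):
        yield record

def container_invoke(run_fn, config, context, cache):
    # the container could support both styles with one check
    result = run_fn(config, context, cache)
    if result is not None:          # generator (or other iterable) case
        for record in result:
            context.log(record, "logDate")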
Have a look at SQLAlchemy for dealing with database stuff in Python. Also, to make script writing easier when dealing with concurrency, look into Stackless Python.