Using a dynamic variable binding in a Leiningen plugin

I’ve got a lein plugin that manually runs my clojure.test code. It declares a dynamic variable, baseuri, that I wish to access from within my tests. I’ll strip down and simplify the code to get straight to the point. Here, inside my plugin, I have a config file that creates the dynamic baseuri variable and sets it to a blank string.
;; myplugin
;; src/leiningen/myplugin/config.clj
(ns leiningen.myplugin.config)
(def ^:dynamic baseuri "")
A task from within the plugin sets the dynamic baseuri variable and runs tests with clojure.test:
;; src/leiningen/myplugin/runtests.clj
(ns leiningen.myplugin.runtests
  (:require [leiningen.myplugin.config :as config]
            [clojure.test]
            [e2e.sometest]))

(defn run [project]
  (binding [config/baseuri "https://google.com/"]
    (println config/baseuri) ;; <-- prints google url
    ;; run clojure.test test cases from the e2e.sometest namespace
    ;; This will call the `sampletest` test case
    (clojure.test/run-tests 'e2e.sometest)))
And inside my clojure.test test I try to use the baseuri variable, but the binding doesn’t hold. Its value is what I originally declared baseuri to be (an empty string):
;; tests/e2e/sometest.clj
(ns e2e.sometest
  (:require [clojure.test :refer [deftest]]
            [leiningen.myplugin.config :as config]))

(deftest sampletest
  (println config/baseuri)) ;; <-- prints an empty string instead of the google url
I've edited the code to show in a basic way how the clojure.test cases are run: I simply pass the namespace I want run to the clojure.test/run-tests function.

I agree that the clojure.test implementation is not optimal when it comes to parameterising your tests.
I am not sure why your binding form doesn't work - I have checked the code in clojure.test and I cannot see what could be wrong. I would check whether:
the tests are executed in the same thread in which the binding is established (maybe you could log the thread name/id in your plugin and in your tests)
different class loaders are in play, causing your plugin namespace and its global dynamic variable to be loaded and defined twice
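The thread issue matters because a dynamic binding is only visible on the thread that establishes it. A minimal standalone sketch (not tied to the plugin code) of how bindings behave across threads:

```clojure
(def ^:dynamic *baseuri* "")

(binding [*baseuri* "https://google.com/"]
  (println *baseuri*)                      ; prints the google url
  ;; a raw Thread does not inherit the binding: it sees the root value ""
  (.start (Thread. #(println *baseuri*)))
  ;; future (and bound-fn) convey the current bindings to the new thread,
  ;; so this prints the google url
  @(future (println *baseuri*)))
```

If your test runner hands the tests off to a thread created outside the binding scope, this is exactly the symptom you would see.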
I have one more idea (and I really don't want to criticise your solution, just trying to find alternative solutions :)): your problem is passing global configuration options to your code under test from external sources such as a test-script configuration. Have you thought about passing them as environment variables? You could easily read them using (System/getenv "baseuri") or the environ library.
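A sketch of that approach (the variable name and fallback value are illustrative):

```clojure
;; read the base URI from an environment variable set by the plugin
;; or the shell, falling back to a default when it is not set
(def baseuri
  (or (System/getenv "BASEURI") "http://localhost:8080/"))
```

Environment variables survive thread and even process boundaries, which sidesteps the binding-scope question entirely.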

Maybe you have a dynamic var for very specific reasons, but since you do not say so explicitly, I'll take a shot here.
Avoid dynamic rebinding of vars. Better yet, avoid global state altogether and instead redefine your functions to take the baseuri as a parameter.
Or refactor your application so that it does not need global vars at all, as it does right now.
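A sketch of the parameter-passing alternative (function names are illustrative):

```clojure
;; instead of the code under test reading a dynamic var...
(defn fetch-page [baseuri path]
  (str baseuri path))  ; placeholder for a real HTTP call

;; ...the runner passes the configured value down explicitly
(defn run [project]
  (let [baseuri "https://google.com/"]
    (fetch-page baseuri "search")))
```

With explicit parameters there is no hidden coupling between the runner and the tests, and nothing depends on which thread the code happens to run on.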
EDIT My guess is that your functions:
(defn run [project]
  (binding [config/baseuri "https://google.com/"]
    (println config/baseuri) ;; <-- prints google url
    ;; runs clojure.test code here …
    ))

(deftest sampletest
  (println config/baseuri))
are not connected in any way - at least I don't see how they should be. You are running a test that prints some other var without rebinding it.
Maybe you could add a link to a repo with a minimal reproducible test case, to understand it better?


ILE RPG Bind by reference using CRTSQLRPGI

I've been trying to find a solution for this, but I cannot.
What I'm trying to do is work with the "bind by reference" ability, but in ILE RPG written with embedded SQL.
I can use the BNDDIR ctl-opt in my source, and everything works correctly.
But that means a "bind by copy" method: I checked by deleting the SRVPGM and even the BNDDIR, and the caller program still works.
So, is there any way to use "bind by reference" in an ILE RPG SQL program?
After my question, an example:
Program SNILOG is a module that contains several procedures, part of them exported.
In QSRVSRC I set the exported procedures, in a source member with the same name, SNILOG. Something like this:
STRPGMEXP PGMLVL(*CURRENT)
/********************************************************************/
/* *MODULE      SNILOG      INIGREDI    04/10/21  15:25:30          */
/********************************************************************/
EXPORT SYMBOL("GETDIAG_TOSTRING")
EXPORT SYMBOL("GETDIAGNOSTICS")
EXPORT SYMBOL("GRABAR_LOG")
EXPORT SYMBOL("SNILOG")
ENDPGMEXP
As part of the procedures are programmed with embedded sql, the compilation must be done with CRTSQLRPGI, using the parameter OBJTYPE(*SRVPGM).
So, I finally get a SRVPGM called SNILOG, with those 4 procedures exported.
Once I've got the SRVPGM, I add it to a BNDDIR called SNI_BNDDIR.
Ok, let's go to the caller program: SNI600V.
Defined with
dftactgrp(*no)
, of course!
And compiled with CRTSQLRPGI and parameter OBJTYPE(*PGM).
Here, if I use the control spec
bnddir('SNI_BNDDIR')
, it works fine.
But not fine enough, as this is a "bind by copy" method (I can delete the SRVPGM or the BNDDIR, and it still works fine).
When I'm not working with SQL, I can use the CRTPGM command and set the BNDSRVPGM parameter to name the SRVPGMs whose procedures the program is going to call.
But I cannot find any similar option in the CRTSQLRPGI command.
Nor in the ctl-opt keywords (we have BNDDIR, but no BNDSRVPGM option).
Any idea?
I'm running V7R3M0 with TR level: 6
Thanks in advance!
The use of
bnddir('SNI_BNDDIR')
is the way to bind either by reference or by copy.
The key is what does your BNDDIR look like?
If you want to bind by reference, then it should include *SRVPGM objects.
If you want to bind by copy, then it should include *MODULE objects.
Generally, you want a *BNDDIR for every *SRVPGM that includes the modules (and maybe a utility *SRVPGM or two) needed for building a specific *SRVPGM.
Then one or more *BNDDIR that includes just *SRVPGM objects that are used to build the programs that use those *SRVPGMs.
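A sketch of that setup in CL (the library name is illustrative):

```
/* Binding directory used when building caller programs: it      */
/* contains the *SRVPGM entry, so the binder binds by reference. */
CRTBNDDIR  BNDDIR(MYLIB/SNI_BNDDIR)
ADDBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR) OBJ((MYLIB/SNILOG *SRVPGM))

/* The caller still compiles with CRTSQLRPGI OBJTYPE(*PGM) and   */
/* bnddir('SNI_BNDDIR') in its ctl-opt; because the directory    */
/* entry is a *SRVPGM, the result is bind by reference.          */
```

Use WRKBNDDIRE to inspect what kind of entries (*MODULE vs *SRVPGM) an existing binding directory actually contains.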

Why is Elixir Logger composed of Macros?

Had to use Logger in one of my applications today, and remembered that I needed to call require Logger first. So I decided to look at the Logger source code to see why debug, info, error, etc. are macros and not simple functions.
From the code, the macros for debug, info, etc. (and even their underlying functions) look very simple. Wasn't it possible to simply export them as functions instead of macros?
From the Logger code:
defmacro log(level, chardata_or_fn, metadata \\ []) do
  macro_log(level, chardata_or_fn, metadata, __CALLER__)
end

defp macro_log(level, data, metadata, caller) do
  %{module: module, function: fun, file: file, line: line} = caller

  caller =
    compile_time_application ++
      [module: module, function: form_fa(fun), file: file, line: line]

  quote do
    Logger.bare_log(unquote(level), unquote(data), unquote(caller) ++ unquote(metadata))
  end
end
From what I can see, it would've been just simpler to make them into functions instead of macros.
It's probably because of the __CALLER__ special form, that provides info about the calling context, including file and line, but is only available in macros.
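A minimal sketch of why that matters - __CALLER__ is only available at macro-expansion time, so only a macro can capture the caller's file and line (the module name here is illustrative):

```elixir
defmodule Where do
  # Expands, at each call site, into a {file, line} tuple for
  # that call site - something a plain function cannot know.
  defmacro here do
    %{file: file, line: line} = __CALLER__
    quote do: {unquote(file), unquote(line)}
  end
end
```

A plain function only sees its arguments at runtime; the caller's location is gone by then unless the caller passes it in explicitly.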
Turns out, another reason is so that the calling code can be stripped out during compile time for the logging levels that the application doesn't want.
From the ElixirForum discussion:
It is to allow the code to just entirely not exist for logging levels that you do not want, so the code is never even called and incurs no speed penalty, thus you only pay for the logging levels that you are actively logging.
and
It's so the entire Logger call can be stripped out of the code at compile time if you use :compile_time_purge_level
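For example, with the older :compile_time_purge_level option, a config like this removes the purged calls from the compiled code entirely (newer Logger versions use :compile_time_purge_matching instead):

```elixir
# config/config.exs
use Mix.Config

# Logger.debug/Logger.info call sites are stripped at compile
# time; only :warn and above remain in the generated code.
config :logger, compile_time_purge_level: :warn
```

This stripping is only possible because the call site is a macro: the macro can expand to nothing, whereas a function call would always be executed.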

with hunchentoot I cannot generate web pages

I am learning common lisp and trying to use hunchentoot to develop web apps.
With the code below I cannot manage to see the page defined in the retro-games function in the browser, although I expect it to be generated by that function.
I enter the address as:
http://localhost:8080/retro-games.htm
What is displayed in the browser is "Resource /retro-games.htm not found", plus the message and the Lisp logo from the default page. I can display Hunchentoot's default page.
(ql:quickload "hunchentoot")
(ql:quickload "cl-who")

(defpackage :retro-games
  (:use :cl :cl-who :hunchentoot))

(in-package :retro-games) ; I evaluate this from the toplevel, otherwise it does not switch to this package.

(start (make-instance 'hunchentoot:acceptor :port 8080))

(defun retro-games ()
  (with-html-output (*standard-output* nil :prologue t)
    (:html (:body "Not much there"))
    (values)))

(push (create-prefix-dispatcher "/retro-games.htm" 'retro-games) *dispatch-table*)
The two loadings at the beginning were successful.
What am I missing?
Hunchentoot's API has changed a bit since that was written. The behaviour of an acceptor assumed by that article is now found in easy-acceptor. Acceptor is a more general class now, which you can use for your own dispatch mechanism, if you are so inclined.
So, instead of (make-instance 'hunchentoot:acceptor #|...|#), use (make-instance 'hunchentoot:easy-acceptor #|...|#), and it should work.
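In other words, keeping the rest of the code the same, the start form becomes:

```lisp
;; easy-acceptor consults *dispatch-table*, so the prefix
;; dispatcher pushed onto it will now actually be used
(start (make-instance 'hunchentoot:easy-acceptor :port 8080))
```

After re-evaluating this (and stopping the old acceptor, if one is still running on port 8080), http://localhost:8080/retro-games.htm should reach the retro-games handler.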
The default implementation of the acceptor's request-dispatch method generates an HTTP Not Found error. So you need to subclass the acceptor class and redefine the acceptor-dispatch-request method in your subclass to make it actually dispatch requests. For an example, see the documentation.
easy-acceptor works because it defines acceptor-dispatch-request to use *dispatch-table* for routing.

Why the heck is Rails 3.1 / Sprockets 2 / CoffeeScript adding extra code?

Working with Rails 3.1 (rc5), I'm noticing that for any CoffeeScript file I include, Rails (or Sprockets) adds initializing JavaScript at the top and bottom. In other words, a blank .js.coffee file gets output looking like this:
(function() {
}).call(this);
This is irritating because it screws up my JavaScript scope (unless I really don't know what I'm doing). I generally separate all of my JavaScript classes into separate files, and I believe that having that function code wrapping my classes puts them out of scope from one another. Or, at least, I can't seem to access them, as I am continually getting undefined errors.
Is there a way to override this? It seems like this file in sprockets has to do with adding this code:
https://github.com/sstephenson/sprockets/blob/master/lib/sprockets/jst_processor.rb
I understand that wrapping everything in a function might seem like an added convenience, as then nothing is run until the DOM is loaded, but as far as I can tell it just messes up my scope.
Are you intending to put your objects into the global scope? I think CoffeeScript usually wraps code in anonymous functions so that it doesn't accidentally leak variables into the global scope. If there's not a way to turn it off, your best bet would probably be to specifically add anything you want to be in the global scope to the window object:
window.myGlobal = myGlobal;
It seems to be a javascript best practice these days to put code inside a function scope and be explicit about adding objects to the global scope, and it's something I usually see CoffeeScript do automatically.
You don't want to put everything into the global scope. You want a module, or module-like, system where you can namespace things so you don't collide with other libraries. Have a read of
https://github.com/jashkenas/coffee-script/wiki/Easy-modules-with-coffeescript
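The pattern from that wiki page, roughly: each file attaches its classes to one shared global namespace object (App is an illustrative name), so the per-file wrappers no longer isolate them from each other:

```coffeescript
# file 1: create the namespace object if it does not exist yet
window.App ?= {}

class App.Game
  constructor: (@title) ->

# file 2: other files reach the class through the namespace,
# regardless of the anonymous function each file is wrapped in
game = new App.Game "Pong"
```

Only the single App object leaks into the global scope; everything else stays private to its file's wrapper.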

Why am I not seeing my macro-created functions in the new slime session? (clojure)

In my Clojure code, I have a few functions which are created by calls to custom macros. Typically, a macro takes a data structure of some sort and creates a function from it.
This is a contrived example:
(create-function {:name "view-data" ...})
which would create a new function called view-data. (My database queries are data-driven, so I can create a function with an indicative name that calls a specific query)
My problem is that when I run the mvn clojure:swank target and connect to the SLIME session from Emacs, these functions aren't visible. I have to visit the file and compile it myself with C-c C-k for the functions to be created.
The maven output suggests that the files themselves compile fine, but the slime session doesn't know about the functions.
Any ideas why this might be happening?
I have a file in my project that requires all the namespaces, which makes all the functions from everywhere available in the REPL. Perhaps there is a more SLIME-elegant way of doing this, but this hack has been very reliable for me.
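A sketch of such a loader namespace (all names are illustrative):

```clojure
;; src/myproject/all.clj - requiring this one namespace loads every
;; other namespace, so any macro-generated functions get defined.
(ns myproject.all
  (:require [myproject.queries]
            [myproject.views]))
```

From the REPL, a single (require 'myproject.all) then brings everything in.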
Note that in Clojure, compiling and loading are separate steps. You can generate all the class files you like, but if they're not loaded, it won't affect the running process.
I don't know enough about clojure:swank for Maven, but it sounds to me that, like Leiningen, the swank target only sets up the classpath for your project and loads the swank code, but not any of the code in your project. So you will still have to load your code in some way after that (for instance, from Emacs/SLIME, using some other target/plugin, or from the REPL).