I read here that "it is permissible for an implementation to ignore" the dynamic-extent declaration in Common Lisp, and I was wondering if it is in fact ignored in the CLISP implementation.
I have tried testing with the following code:
(let ((b (cons 1 2)))
  (declare (dynamic-extent b))
  (list b))
Which returns:
((1 . 2))
My guess is that it is ignored, but I wanted to be sure.
Also, if it is ignored, is there a way for me to explicitly allocate memory to the stack rather than the heap?
> is there a way for me to explicitly allocate memory to the stack rather than the heap?
No, and you can be thankful for it, because it eliminates an entire category of bugs from your programs: there can be no "dangling pointers" to already-dead objects that make your program crash.
Furthermore, with CLISP or similar implementations, you don't need stack-allocated memory because:
The garbage collector eliminates short-lived objects quickly, and CLISP's garbage collector usually consumes less than 10% of the CPU time.
CLISP's garbage collector is generational, which means that collecting short-lived objects is particularly fast. And once a short-lived object has been collected, the next short-lived object will be allocated in the same small memory area, so you get a speed improvement similar to that of a stack (through locality).
Finally, insisting on stack allocation of objects prevents you from freely choosing the programming style that suits your problem. Lisp supports many programming styles: functional, procedural, object-oriented, pattern-based, logic, relational, rule-based, goal-oriented, and more. By requesting stack allocation, you restrict yourself to the functional and procedural styles; this really does not move you forward.
Yes, CLISP ignores the dynamic-extent declaration.
See CLISP Implementation Notes, which do not mention it explicitly yet (the beta version does).
If you want control over memory management (e.g., you enjoy debugging segfaults and memory leaks), you should be using C.
PS. An implementation that respects the dynamic-extent declaration would probably segfault on your code.
This is actually a great question! (I have to ingratiate myself because my karma is so low!)
As others have mentioned, CLISP ignores the dynamic-extent declaration. However, other implementations do obey it, and for good reason. Quoting the SBCL manual:
SBCL has fairly extensive support for performing allocation on the
stack when a variable is declared dynamic-extent. The
dynamic-extent declarations are not verified, but are simply
trusted as long as sb-ext:*stack-allocate-dynamic-extent* is
true.
Also, keep in mind that CL declarations are promises made by the programmer to the Lisp system. Declarations, in general, have no defined observable behavior, whether those promises are kept or broken.
The always helpful and cheery SBCL documentation goes on to say:
If dynamic extent constraints specified in the Common Lisp standard
are violated, the best that can happen is for the program to have
garbage in variables and return values; more commonly, the system will
crash.
In particular, it is important to realize that dynamic extent is
contagious:
(let* ((a (list 1 2 3))
       (b (cons a a)))
  (declare (dynamic-extent b))
  ;; Unless A is accessed elsewhere as well, SBCL will consider
  ;; it to be otherwise inaccessible -- it can only be accessed
  ;; through B, after all -- and stack allocate it as well.
  ;;
  ;; Hence returning (CAR B) here is unsafe.
  ...)
The takeaway: in Lisp, declarations generally don't change a conforming program's 'meaning'. All bets are off if your program runs counter to your declared intent. What the 'effect' of a declaration is, i.e. how the compiler optimizes the resulting code (and whether it optimizes it at all), varies from implementation to implementation and even from version to version.
I keep reading that Lisp macros are one of the most powerful features of the language. But reading over the specifications and manuals, it seems they are just functions whose arguments are unevaluated.
Given any macro (defmacro example (arg1 ... argN) (body-forms)) I could just write (defun example (arg1 ... argN) ... (body-forms)) with the last body-form turned into a list and then call it like (eval (example 'arg1 ... 'argN)) to emulate the same behavior of the macro. If this were the case, then macros would just be syntactic sugar, but I doubt that syntactic sugar would be called a powerful language feature. What am I missing? Are there cases where I cannot carry out this procedure to emulate a macro?
I can't speak to "powerful", since that is somewhat subjective, but macros are regular Lisp functions that work on Lisp data, so they are as expressive as other functions. This isn't the case with templates or generic functions in other languages, which rely more on static types and are (on purpose) more restricted.
In some way, yes, macros are simple syntactic facilities, but your emulation focuses on the dynamic semantics of macros, i.e. how you can run code that evaluates macros at runtime. However:
the code using eval is not equivalent to expanded code
the preprocessing/compile-time aspect of macros is not emulated
Lexical scope
Functions, like +, do not inherit the lexical scope:
(let ((x 30))
  (+ 3 4))
Inside the definition of +, you cannot access x. Being able to do so is what "dynamic scope" is about (more precisely, see dynamic extent and indefinite scope variables). But nowadays relying on dynamic scope is quite the exception. Most functions use lexical scope, and this is the case for eval too.
The eval function evaluates a form in the null lexical environment, and it never has access to the surrounding lexical bindings. As such, it behaves like any regular function.
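You can see this at the REPL with a one-liner:

(let ((x 42))
  (eval 'x)) ; error: X is unbound -- EVAL does not see the lexical binding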
So, in your example, calling eval on the transformed source code will not work, since arg1 to argN will probably be unbound (it depends on what your macro does).
In order to have an equivalent form, you have to inject bindings in the transformed code, or expand at a higher level:
(defun expand-square (var)
  (list '* var var))

;; instead of:
(defun foo (x) (eval (expand-square 'x))) ; X is unbound during EVAL

;; inject bindings:
(defun foo (x) (eval `(let ((z ,x)) ,(expand-square 'z))))

;; or expand the top-level form:
(eval `(defun foo (x) ,(expand-square 'x)))
Note that macros (in Common Lisp) also have access to the lexical environment through &environment parameters in their lambda-list. The use of this environment is implementation dependent, but can be used to access the declarations associated with a variable, for example.
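For instance, here is a small sketch of using &environment so that macro definitions from an enclosing macrolet are visible during expansion:

(defmacro show-expansion (form &environment env)
  ;; Passing ENV to MACROEXPAND makes locally defined macros visible.
  `',(macroexpand form env))

(macrolet ((twice (x) `(* 2 ,x)))
  (show-expansion (twice 3))) ; => (* 2 3)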
Notice also how, in the eval-of-defun example above, you evaluate the code when defining the function, and not when running it. This is the second thing about macros.
Expansion time
In order to emulate macros, you could locally replace a call to a macro by a form that emulates it at runtime (using let to capture all the bindings you want to see inside the expanded code, which is tedious), but then you would miss the useful aspect of macros: generating code ahead of time.
The last example above shows how you can quote defun and wrap it in eval, and basically you would need to do that for all functions if you wanted to emulate the preprocessing work done by macros.
The macro system is a way to integrate this preprocessing step in the language in a way that is simple to use.
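For contrast, here is how the expand-square expander from above becomes a true macro, moving all the work to expansion time (a minimal sketch):

(defmacro square (x)
  (expand-square x)) ; the expander runs during macroexpansion

(defun foo (x)
  (square x)) ; compiles as (* x x); no EVAL at runtime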
Conclusion
Macros themselves are a nice way to abstract things when functions can't. For example, you can have a more human-friendly, stable syntax that hides implementation details. That's how you can define pattern-matching facilities in Common Lisp that look as if they were part of the language, without much runtime penalty or verbosity.
They rely on simple term-rewriting functions that are integrated in the language, but you can emulate their behavior either at compile-time or runtime yourself if you want. They can be used to perform different kinds of abstraction that are usually missing or more cumbersome to do in other languages, but are also limited: they don't "understand" code by themselves, they don't give access to all the facilities of the compiler (type propagation, etc.). If you want more you can use more advanced libraries or compiler tools (see deftransform), but macros at least are portable.
Macros are not just functions whose arguments are unevaluated. Macros are functions between programming languages. In other words a macro is a function whose argument is a fragment of source code of a programming language which includes the macro, and whose value is a fragment of source code of a language which does not include the macro (or which includes it in a simpler way).
In very ancient, very rudimentary, Lisps, before people really understood what macros were, you could simulate macros with things called FEXPRs combined with EVAL. A FEXPR was simply a function which did not evaluate its arguments. This worked in such Lisps only because they were completely dynamically scoped, and the cost of it working was that compilation of such things was not possible at all. Those are two enormous costs.
In any modern Lisp, this won't work at all. You can write a toy version of FEXPRs as a macro (this may be buggy):
(defmacro deffex (fx args &body body)
  (assert (every (lambda (arg)
                   (and (symbolp arg)
                        (not (member arg lambda-list-keywords))))
                 args)
          (args) "not a simple lambda list")
  `(defmacro ,fx ,args
     `(let ,(mapcar (lambda (argname argval)
                      `(,argname ',argval))
                    ',args (list ,@args))
        ,@',body)))
So now we could try to write a trivial binding construct I'll call with using this thing:
(deffex with (var val form)
  (eval `(let ((,var ,val)) ,form)))
And this seems to work:
> (with a 1 a)
1
Of course, we're paying the cost that no code which uses this construct can ever be compiled so all our programs will be extremely slow, but perhaps that is a cost we're willing to accept (it's not, but never mind).
Except, of course, it doesn't work, at all:
> (with a 1
    (with b 2
      (+ a b)))
Error: The variable a is unbound.
Oh dear.
Why doesn't it work? It doesn't work because Common Lisp is lexically scoped, and eval is a function: it can't see the lexical bindings.
So not only does this kind of approach prevent compilation in a modern Lisp, it doesn't work at all.
People often, at this point, suggest some kind of kludge which would allow eval to see lexical bindings. The cost of such a solution is that all the lexical bindings have to survive in compiled code: no variable can ever be compiled away, not even its name. That essentially rules out any good compiler, even for the small part of your programs you could otherwise compile in a language which, like CL, makes extensive use of macros. For instance, if you ever used defun, you could not compile the code in its body. People do use defun occasionally, I think.
So this approach simply won't work: it worked by happenstance in very old Lisps but it can't work, even at the huge cost of preventing compilation, in any modern Lisp.
More to the point this approach obfuscates the understanding of what macros are: as I said at the start, macros are functions between programming languages, and understanding that is critical. When you are designing macros you are implementing a new programming language.
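For comparison: written as a real macro, with is a two-liner that nests and compiles without any of these problems:

(defmacro with (var val form)
  `(let ((,var ,val)) ,form))

> (with a 1
    (with b 2
      (+ a b)))
3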
Say I have a hash table at runtime that has strings as keys. Can a macro have access to this information and build a let expression from it?
(define env (hash 'a 123 'b 321))
(magic-let env (+ a b)) ; 444
I know I can hack around with identifier-binding, replacing non-defined identifiers with a lookup in the hash table, but then shadowing will not work as in a normal let.
Tagging scheme too, as I assume its macro system is similar.
No, you can’t do that. At least not the way you describe.
The general reason why you cannot access runtime values within macros is simple: macros are fully expanded at compile time. When your program is compiled, the runtime values simply do not exist. A program can be compiled, and the bytecode can be placed on another computer, which will run it weeks later. Macro-expansion has already happened. No matter what happens at runtime, the program isn’t going to change.
This guarantee turns out to be incredibly important for a multitude of reasons, but that’s too general a discussion for this question. It would be relevant to discuss a particular question, which is why bindings themselves need to be static.
In Racket, as long as you are within a module (i.e. not at the top-level/REPL), all bindings can be resolved statically, at compile time. This is a very useful property in other programming languages too, mostly because the compiler can generate much better optimized code, but it is especially important in Racket or Scheme. This is because of how the macro system operates: in a language with hygienic macros, scope is complicated.
This is actually a very good thing—it is robust enough to support very complex systems that would be much harder to manage without hygiene—but it introduces some constraints:
Since every binding can be a macro or a runtime value, the binding needs to be known ahead of time in order to perform program expansion. The compiler needs to know if it needs to perform macro expansion or simply emit a variable reference.
Additionally, scoping rules are much more intricate because macro-introduced bindings live in their own scope. Because of this, binding scopes do not need to be strictly lexical.
Your magic-let could not work quite as you describe because the compiler could not possibly deduce the bindings for a and b statically. However, all is not lost: you could hook into #%top, a magical identifier introduced by the expander when encountering an unbound identifier. You could use this to replace unbound values with a hash lookup, and you could use syntax parameters to adjust #%top hygienically within each magic-let. Here’s an example:
#lang racket
(require (rename-in racket/base [#%top base-#%top])
         racket/stxparam)

(define-syntax-parameter #%top (make-rename-transformer #'base-#%top))

(define-syntax-rule (magic-let env-expr expr ...)
  (let ([env env-expr])
    (syntax-parameterize ([#%top (syntax-rules ()
                                   [(_ . id) (hash-ref env 'id)])])
      (let () expr ...))))

(magic-let (hash 'a 123 'b 321) (+ a b)) ; => 444
Of course, keep in mind that this would replace all unbound identifiers with hash lookups. The effects of this are twofold. First of all, it will not shadow identifiers that are already bound:
(let ([a 1])
  (magic-let (hash 'a 2)
             a)) ; => 1
This is probably for the best, just to keep things semi-sane. It also means that the following would raise a runtime exception, not a compile-time error:
(magic-let (hash 'a 123) (+ a b))
; hash-ref: no value found for key
; key: 'b
I wouldn’t recommend doing this, as it goes against a lot of the Racket philosophy, and it would likely cause some hard-to-find bugs. There’s probably a better way to solve your problem without abusing things like #%top. Still, it is possible, if you really want it.
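For instance, here is a sketch of one such alternative (explicit-magic-let is a made-up name): have the caller list the identifiers, which keeps the bindings static and lets ordinary shadowing apply:

(define-syntax-rule (explicit-magic-let env-expr (id ...) expr ...)
  (let ([env env-expr])
    ;; Each listed ID becomes an ordinary LET binding.
    (let ([id (hash-ref env 'id)] ...)
      expr ...)))

(explicit-magic-let (hash 'a 123 'b 321) (a b) (+ a b)) ; => 444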
I've learned Clojure previously and really like the language. I also love Emacs and have hacked some simple stuff with Emacs Lisp. There is one thing which prevents me mentally from doing anything more substantial with Elisp though. It's the concept of dynamic scoping. I'm just scared of it since it's so alien to me and smells like semi-global variables.
So with variable declarations I don't know which things are safe to do and which are dangerous. From what I've understood, variables set with setq fall under dynamic scoping (is that right?). What about let variables? Somewhere I've read that let allows you to do plain lexical scoping, but somewhere else I read that let vars are also dynamically scoped.
I guess my biggest worry is that my code (using setq or let) accidentally breaks variables from platform or third-party code that I call, or that after such a call my local variables are accidentally messed up. How can I avoid this?
Are there a few simple rules of thumb that I can just follow and know exactly what happens with the scope without being bitten in some weird, hard-to-debug way?
It isn't that bad.
The main problems can appear with 'free variables' in functions.
(defun foo (a)
  (* a b))
In the above function, a is a local variable. b is a free variable. In a system with dynamic binding, like Emacs Lisp, b will be looked up at runtime. There are now three cases:
b is not defined -> error
b is a local variable bound by some function call in the current dynamic scope -> take that value
b is a global variable -> take that value
The problems can then be:
a bound value (global or local) is shadowed by a function call, possibly unwanted
an undefined variable is NOT shadowed -> error on access
a global variable is NOT shadowed -> picks up the global value, which might be unwanted
In a Lisp with a compiler, compiling the above function might generate a warning that there is a free variable. Typically Common Lisp compilers will do that. An interpreter won't provide that warning; one will just see the effect at runtime.
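A small demonstration of cases 2 and 3, reusing foo from above (assuming the default dynamic binding):

(defvar b 10)  ; give B a global value

(foo 2)        ; => 20, case 3: FOO picks up the global B

(let ((b 100)) ; a LET binding in the dynamic scope shadows the global
  (foo 2))     ; => 200, case 2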
Advice:
make sure that you don't use free variables accidentally
make sure that global variables have a special name, so that they are easy to spot in source code, usually *foo-var*
Don't write
(defun foo (a b)
  ...
  (setq c (* a b)) ; where c is a free variable
  ...)
Write:
(defun foo (a b)
  ...
  (let ((c (* a b)))
    ...)
  ...)
Bind all the variables you want to use, so that you can be sure they are not bound somewhere else.
That's basically it.
Since GNU Emacs version 24 lexical binding is supported in its Emacs Lisp. See: Lexical Binding, GNU Emacs Lisp Reference Manual.
In addition to the last paragraph of Gilles's answer, here is how RMS argues in favor of dynamic scoping in an extensible system:
Some language designers believe that dynamic binding should be avoided, and explicit argument passing should be used instead. Imagine that function A binds the variable FOO, and calls the function B, which calls the function C, and C uses the value of FOO. Supposedly A should pass the value as an argument to B, which should pass it as an argument to C.

This cannot be done in an extensible system, however, because the author of the system cannot know what all the parameters will be. Imagine that the functions A and C are part of a user extension, while B is part of the standard system. The variable FOO does not exist in the standard system; it is part of the extension. To use explicit argument passing would require adding a new argument to B, which means rewriting B and everything that calls B. In the most common case, B is the editor command dispatcher loop, which is called from an awful number of places.

What's worse, C must also be passed an additional argument. B doesn't refer to C by name (C did not exist when B was written). It probably finds a pointer to C in the command dispatch table. This means that the same call which sometimes calls C might equally well call any editor command definition. So all the editing commands must be rewritten to accept and ignore the additional argument. By now, none of the original system is left!
Personally, I think that if there is a problem with Emacs Lisp, it is not dynamic scoping per se, but that it is the default, and that lexical scoping cannot be achieved without resorting to extensions. In CL, both dynamic and lexical scoping can be used, and, except for the top level (which is addressed by several deflex implementations) and globally declared special variables, the default is lexical scoping. In Clojure, too, you can use both lexical and dynamic scoping.
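For illustration, here is a minimal sketch of the deflex idea (one of several published approaches; the backing-name scheme is made up): hide a special variable behind a global symbol macro, so the name reads like a global lexical and let can shadow it.

(defmacro deflex (name value)
  ;; Sketch only: real implementations also handle docstrings etc.
  (let ((backing (intern (format nil "*STORAGE-FOR-DEFLEX-~A*" name))))
    `(progn
       (defparameter ,backing ,value)
       (define-symbol-macro ,name ,backing))))

(deflex counter 0)
(incf counter)                ; => 1, stored in the hidden special
(let ((counter 100)) counter) ; => 100, LET shadows the symbol macro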
To quote RMS again:
It is not necessary for dynamic scope to be the only scope rule provided, just useful for it to be available.
Are there a few simple rules of thumb that I can just follow and know exactly what happens with the scope without being bitten in some weird, hard-to-debug way?
Read Emacs Lisp Reference, you'll have many details like this one :
Special Form: setq [symbol form]...
This special form is the most common method of changing a
variable's value. Each SYMBOL is given a new value, which is the
result of evaluating the corresponding FORM. The most-local
existing binding of the symbol is changed.
Here is an example :
(defun foo () (setq tata "foo"))
(defun bar (tata) (setq tata "bar"))
(foo)
(message tata)
===> "foo"
(bar tata)
(message tata)
===> "foo"
As Peter Ajtai pointed out:
Since emacs-24.1 you can enable lexical scoping on a per file basis by putting
;; -*- lexical-binding: t -*-
on top of your elisp file.
First, elisp has separate variable and function bindings, so some pitfalls of dynamic scoping are not relevant.
Second, you can still use setq to set variables, but the value set does not survive the exit of the dynamic scope it is done in. This isn't fundamentally different from lexical scoping; the difference is that with dynamic scoping a setq in a function you call can affect the value you see after the function call.
There's lexical-let, a macro that (essentially) imitates lexical bindings (I believe it does this by walking the body and changing all occurrences of the lexically let variables to a gensymmed name, eventually uninterning the symbol), if you absolutely need to.
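For instance, a minimal closure sketch with lexical-let:

(require 'cl) ; LEXICAL-LET comes from the (old) cl package

(defun make-adder (n)
  (lexical-let ((n n))
    (lambda (x) (+ x n)))) ; the lambda closes over N lexically

(funcall (make-adder 3) 4) ; => 7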
I'd say "write code as normal". There are times when the dynamic nature of elisp will bite you, but I've found that in practice that is surprisingly seldom.
Here's an example of what I was saying about setq and dynamically-bound variables (recently evaluated in a nearby scratch buffer):
(let ((a nil))
  (list (let ((a nil))
          (setq a 'value)
          a)
        a))
(value nil)
Everything that has been written here is worthwhile. I would add this: get to know Common Lisp -- if nothing else, read about it. CLTL2 presents lexical and dynamic binding well, as do other books. And Common Lisp integrates them well in a single language.
If you "get it" after some exposure to Common Lisp then things will be clearer for you for Emacs Lisp. Emacs 24 uses lexical scoping to a greater extent by default than older versions, but Common Lisp's approach will still be clearer and cleaner (IMHO). Finally, it is definitely the case that dynamic scope is important for Emacs Lisp, for the reasons that RMS and others have emphasized.
So my suggestion is to get to know how Common Lisp deals with this. Try to forget about Scheme, if that is your main mental model of Lisp -- it will limit you more than help you in understanding scoping, funargs, etc. in Emacs Lisp. Emacs Lisp, like Common Lisp, is "dirty and low-down"; it is not Scheme.
Dynamic and lexical scoping have different behaviors when a piece of code is used in a different scope than the one it was defined in. In practice, there are two patterns that cover most troublesome cases:
A function shadows a global variable, then calls another function that uses that global variable.
(defvar x 3)
(defun foo ()
  x)
(defun bar (x)
  (+ (foo) x))
(bar 0) ⇒ 0
This doesn't come up often in Emacs because local variables tend to have short names (often single words) whereas global variables tend to have long names (often prefixed by packagename-). Many standard functions have names that are tempting to use as local variables, like list and point, but functions and variables live in separate name spaces, and local functions are not used very often.
A function is defined in one lexical context and used outside this lexical context because it's passed to a higher-order function.
(let ((cl-y 10))
  (mapcar* (lambda (elt) (* cl-y elt)) '(1 2 3)))
⇒ (10 20 30)

(let ((cl-x 10))
  (mapcar* (lambda (elt) (* cl-x elt)) '(1 2 3)))
⇑ (wrong-type-argument number-or-marker-p (1 2 3))
The error is due to the use of cl-x as a variable name in mapcar* (from the cl package). Note that the cl package uses cl- as a prefix even for its local variables in higher-order functions. This works reasonably well in practice, as long as you take care not to use the same variable as a global name and as a local name, and you don't need to write a recursive higher-order function.
P.S. Emacs Lisp's age isn't the only reason why it's dynamically scoped. True, in those days, Lisps tended towards dynamic scoping — Scheme and Common Lisp hadn't really caught on yet. But dynamic scoping is also an asset in a language targeted towards extending a system dynamically: it lets you hook into more places without any special effort. With great power comes great rope to hang yourself: you risk accidentally hooking into a place you didn't know about.
The other answers are good at explaining the technical details on how to work with dynamic scoping, so here's my non-technical advice:
Just do it
I've been tinkering with Emacs lisp for 15+ years and don't know that I've ever been bitten by any problems due to the differences between lexical/dynamic scope.
Personally, I've not found the need for closures (I love 'em, just don't need them for Emacs). And I try to avoid global variables in general (whether the scoping is lexical or dynamic).
So I suggest jumping in and writing customizations that suit your needs/desires, chances are you won't have any problems.
I entirely feel your pain. I find the lack of lexical binding in emacs rather annoying - especially not being able to use lexical closures, which seems to be a solution I think of a lot, coming from more modern languages.
While I don't have any more advice on working around the missing features that the previous answers didn't already cover, I'd like to point out the existence of an Emacs branch called `lexbind', implementing lexical binding in a backward-compatible way. In my experience lexical closures are still a little buggy in some circumstances, but that branch appears to be a promising approach.
Just don't.
Emacs 24 lets you use lexical scope. Just run
(setq lexical-binding t)
or add
;; -*- lexical-binding: t -*-
at the beginning of your file.
Peter Norvig mentions in Paradigms of Artificial Intelligence Programming, on page 50, the trade-off between specificity and consistency when choosing whether to use setq or setf to update a variable to a value. What do you recommend? Have you ever run into a situation where it mattered much beyond readability?
Using setq is more low-level, but the performance of setf is not a problem. And setf allows you (or library writers) to provide custom setf behavior, like setting parts of custom data structures. I say: go with setf everywhere unless you have a reason not to.
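For example, here is a sketch of providing custom setf behavior for a made-up accessor, via a setf function:

(defstruct point x y)

(defun point-coords (p)
  (list (point-x p) (point-y p)))

;; The first parameter of a setf function is the new value.
(defun (setf point-coords) (coords p)
  (setf (point-x p) (first coords)
        (point-y p) (second coords))
  coords)

(let ((p (make-point :x 0 :y 0)))
  (setf (point-coords p) '(3 4))
  (point-coords p)) ; => (3 4)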
Also see Practical Common Lisp, chapter 3: "The SETF macro is Common Lisp's main assignment operator." PCL is available online for free: http://gigamonkeys.com/book/
FWIW, I always use setf. If I change the structure of my code slightly, I just need to change the "place" instead of the place and the operator (setq -> setf).
Also, don't worry about performance, setf is exactly the same as setq for symbols:
CL-USER> (macroexpand '(setf foo 42))
(SETQ FOO 42)
You can use setf wherever you could use setq. In fact, setf is actually a macro which builds on setq. So, this should be purely a readability and style issue.
Almost all the code I've seen avoids the use of setq and uses setf.
setf is "set field", it changes a place and can have user extensions.
setq is set with quoting first argument.
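A quick illustration of the difference:

(defvar *lst* (list 1 2 3))

(setf (first *lst*) 10)    ; SETF accepts any place...
;; (setq (first *lst*) 10) ; ...but SETQ takes only symbols: an error

*lst* ; => (10 2 3)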
I recommend you follow Norvig's final advice in that section: be consistent. 'Readability' is of course the most important reason to make any choice in programming. If it is important to communicate to the reader (perhaps you in 2 months' time) that you are dealing with the entire value cell of a symbol, then use setq; otherwise use setf. But only if you're being consistent.
You will not be wrong if you used setf everywhere instead of setq.
It's things like this that drag Common Lisp down and keep it from going forward: lots of unused stuff that implementors still need to support.
I've been working through Practical Common Lisp and as an exercise decided to write a macro to determine if a number is a multiple of another number:
(defmacro multp (value factor)
  `(= (rem ,value ,factor) 0))
so that:
(multp 40 10)
evaluates to true whilst
(multp 40 13)
does not
The question is: does this macro leak in some way? Also, is this "good" Lisp? Is there already an existing function/macro that I could have used?
Seibel gives an extensive rundown (for simple cases, anyway) of possible sources of leaks, and there aren't any of those here. Both value and factor are evaluated only once and in order, and rem doesn't have any side effects.
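For contrast, here is a hypothetical variant that would leak by evaluating an argument more than once:

(defmacro bad-multp (value factor)
  ;; VALUE appears twice in the expansion, so a side-effecting
  ;; argument runs twice -- a classic multiple-evaluation leak.
  `(= ,value (* ,factor (floor ,value ,factor))))

(let ((n 39))
  (bad-multp (incf n) 10) ; (INCF N) is evaluated twice here
  n)                      ; => 41, not 40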
This is not good Lisp though, because there's no reason to use a macro in this case. A function
(defun multp (value factor)
  (zerop (rem value factor)))
is identical for all practical purposes. (Note the use of zerop. I think it makes things clearer in this case, but in cases where you need to highlight that the value you're testing might still be meaningful if it's something other than zero, (= ... 0) might be better.)
Your macro looks fine to me. I don't know what a leaky macro is, but yours is pretty straightforward and doesn't require any gensyms. As far as if this is "good" Lisp, my rule of thumb is to use a macro only when a function won't do, and in this case a function can be used in place of your macro. However, if this solution works for you there's no reason not to use it.
Well, in principle, a user could do this:
(flet ((= (&rest args) nil))
  (multp 40 10))
which would evaluate to NIL... except that ANSI CL makes it illegal to rebind most standard symbols, including CL:=, so you're on the safe side in this particular case.
In general, of course, you should be aware of both referential untransparency (capturing identifiers from the context the macro is expanded in) and macro unhygiene (leaking identifiers to the expanded code).
No, no symbol introduced in the macro's "lexical closure" is released to the outside.
Note that leaking isn't NECESSARILY a bad thing, even if accidental leaking almost always is. For one project I worked on, I found that a macro similar to this was useful:
(defmacro ana-and (&rest forms)
  (loop for form in (reverse forms)
        for completion = form then `(let ((it ,form))
                                      (when it
                                        ,completion))
        finally (return completion)))
This allowed me to get "short-circuiting" of things that needed to be done in sequence, with arguments carried over from previous calls in the sequence (and failure signalled by returning NIL). The specific context this code comes from is a hand-written parser for a configuration file with a cobbled-together-enough syntax that writing a proper parser using a parser generator was more work than hand-rolling one.
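To see how it reads in use, here is a hypothetical chain (the helper names are made up):

(ana-and (gethash key table)   ; NIL from a failed lookup stops the chain
         (parse-entry it)      ; IT is the value found in the table
         (entry-options it))   ; IT is the parsed entry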