I'm pretty new to Emacs Lisp and still learning how to do some of the basics.
I have some text like [123] and I want to extract the number 123. I've goofed around with a few different attempts but I still can't seem to capture the number reliably. The closest I've gotten is extracting the character ].
Can anyone point me in a direction? My biggest struggle is understanding how to capture the number once I've used search-forward and search-backward to find the positions of the brackets.
Thanks in advance!
Try
;; "\\[" and "\\]" match the literal brackets; "\\(...\\)" captures
;; the digits as match group 1, retrieved with match-string.
(when (re-search-forward "\\[\\([0-9]+\\)\\]" nil t)
  (string-to-number (match-string 1)))
Alternatively, when the point is already on top of the number, thing-at-point may be more convenient:
(string-to-number (thing-at-point 'sexp))
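Building on the same regexp, here is a hedged sketch of a helper that collects every bracketed number after point (the function name is my own invention):

(defun my-collect-bracketed-numbers ()
  "Return every number written like [123] after point, as a list of integers."
  (let (numbers)
    (save-excursion
      ;; Each successful search leaves the digits in match group 1.
      (while (re-search-forward "\\[\\([0-9]+\\)\\]" nil t)
        (push (string-to-number (match-string 1)) numbers)))
    (nreverse numbers)))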
I'm running the following code in Emacs's Lisp Interaction mode:
(defun square (x) (* x x))
(square (square (square 1001)))
which is giving me 1114476179152563777. However, ((1001^2)^2)^2 is actually 1008028056070056028008001.
How is this possible?
@Barmar's answer is accurate for Emacs versions < 27.
In Emacs 27 bignum support has been added. NEWS says:
** Emacs Lisp integers can now be of arbitrary size.
Emacs uses the GNU Multiple Precision (GMP) library to support
integers whose size is too large to support natively. The integers
supported natively are known as "fixnums", while the larger ones are
"bignums". The new predicates 'bignump' and 'fixnump' can be used to
distinguish between these two types of integers.
All the arithmetic, comparison, and logical (a.k.a. "bitwise")
operations where bignums make sense now support both fixnums and
bignums. However, note that unlike fixnums, bignums will not compare
equal with 'eq', you must use 'eql' instead. (Numerical comparison
with '=' works on both, of course.)
Since large bignums consume a lot of memory, Emacs limits the size of
the largest bignum a Lisp program is allowed to create. The
nonnegative value of the new variable 'integer-width' specifies the
maximum number of bits allowed in a bignum. Emacs signals an integer
overflow error if this limit is exceeded.
Several primitive functions formerly returned floats or lists of
integers to represent integers that did not fit into fixnums. These
functions now simply return integers instead. Affected functions
include functions like 'encode-char' that compute code-points, functions
like 'file-attributes' that compute file sizes and other attributes,
functions like 'process-id' that compute process IDs, and functions like
'user-uid' and 'group-gid' that compute user and group IDs.
and indeed using my 27.0.50 build:
(defun square (x) (* x x))
square
(square (square (square 1001)))
1008028056070056028008001
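For completeness, a quick scratch-buffer check of the new predicates and the eq/eql caveat from NEWS (my own experiment; the exact fixnum boundary depends on the build):

(fixnump (expt 2 61))            ; nil on a 64-bit build: it exceeds 2^61 - 1
(bignump (expt 2 61))            ; t
(eq (expt 2 100) (expt 2 100))   ; nil -- bignums are not eq
(eql (expt 2 100) (expt 2 100))  ; t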
Emacs Lisp doesn't implement bignums; it uses the machine's integer type. The range of integers it supports is between most-negative-fixnum and most-positive-fixnum. On a 64-bit system, most-positive-fixnum will be 2^61 - 1, which is 19 decimal digits.
See Integer Basics in the Elisp manual.
The correct result of your calculation is 25 digits, which is much larger than this. The calculation overflows and wraps around. It should be correct modulo 2^62.
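To make the limit concrete (values from a 64-bit build; the pre-27 wrap-around behavior is assumed here):

most-positive-fixnum        ; => 2305843009213693951, i.e. 2^61 - 1
(+ most-positive-fixnum 1)  ; wraps around to most-negative-fixnum on Emacs < 27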
You could use floating point instead. It has a much larger range, although very large numbers lose precision.
(square (square (square 1001.0)))
1.008028056070056e+24
I have to run some computations in Racket, which I have never used before.
How do I force it to calculate something in single- or half-precision floats (if it even has those)?
I figured out how to make it compute in big floats:
(bf/ (bf 1) (bf 7))
I know that the abbreviation for floats (double precision) is fl. I cannot figure out the right abbreviation for single floats though.
The 'bigfloat' package you refer to is for arbitrary-precision floating-point numbers. You're very unlikely to want these, as you point out.
It sounds like you're looking for standard IEEE 64-bit floating point numbers. Racket uses these by default, for all inexact arithmetic.
So, for instance:
(/ 1 pi)
produces
0.3183098861837907
One possible tripper-upper is that when dividing two rational numbers, the result will again be a rational number. So, for instance,
(/ 12347728 298340194)
produces
6173864/149170097
You can force inexact arithmetic either by using exact->inexact (always works), or by ensuring that your literals end with decimals (unless you're using the htdp languages).
So, for instance:
(/ 12347728.0 298340194.0)
produces
0.04138808061511148
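The same result can be had with exact->inexact applied to the exact rational quotient (both operands here are exactly representable, so the two routes round to the same double):

(exact->inexact (/ 12347728 298340194))  ; => 0.04138808061511148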
Let me know if this doesn't answer your question....
This happens in both Common Lisp (CLISP and SBCL) and Scheme (Guile). While these are true:
(= 1/2 0.5)
(= 1/4 0.25)
This turns out to be false:
(= 1/5 0.2)
I checked the HyperSpec; it says that "=" should check for mathematical equivalence regardless of the types of the arguments. What the heck is going on?
The problem is that 0.2 really is not equal to 1/5. Floating-point numbers cannot represent 0.2 exactly, so the literal 0.2 is actually rounded to the nearest representable floating-point number (0.200000001 or something like that). After this rounding occurs, the computer has no way of knowing that your number was originally 0.2 and not another nearby non-representable number (such as 0.20000000002).
As for why 1/2 and 1/4 work: it's because floating point is a base-2 encoding and can represent (negative) powers of two exactly.
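You can see the actual rational value hiding behind a float literal with the standard function rational (a Common Lisp example; the exact fractions depend on the float width):

(rational 0.2)    ; => 13421773/67108864 for single-floats -- not 1/5
(rational 0.2d0)  ; => 3602879701896397/18014398509481984 for double-floats
(rational 0.5)    ; => 1/2 -- exactly representable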
Please read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
This actually depends on what is coerced to what. If you think about it, rational is more precise, so it makes sense to coerce to rational for comparison rather than to float; however, if you consciously want to compare numbers as floats, you can force it by doing something like the below:
(declaim (inline float=))
(defun float= (a b)
  ;; Compare after coercing both arguments to single-floats.
  (= (coerce a 'float) (coerce b 'float)))

(float= 0.2 1/5) ; => T
Actually... there's more to it, since floats provide you with things like not-a-number, positive infinity and negative infinity. The largest finite 64-bit float is about 1.8e308, so there's nothing stopping you from creating a rational larger than every finite float (or smaller than every finite negative one!), so, perhaps, if you want to be super precise, you'd need to consider those cases too. Likewise, a comparison to not-a-number must always give you nil...
However, in Scheme you have exact numbers, so you may ask (note the #e prefix, which means the number that follows is to be treated exactly):
> (= 1/5 #e0.2)
#t
I'm trying to get a first Lisp program to work using the CLISP implementation, by typing
(print (mod (+ (* 28433 (expt 2 7830457) 1)) (expt 10 10)))
in the REPL,
but it gives me *** - overflow during multiplication of large numbers. I thought Lisp features arbitrary size/precision. How could that ever happen, then?
Lisp's bignums may hold really large numbers, but they too have their limits.
In your case, you can combine exponentiation and modulus into a single procedure, e.g. as in http://en.wikipedia.org/wiki/Modular_exponentiation#Right-to-left_binary_method.
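In case it's useful, a minimal Common Lisp sketch of that right-to-left binary method (the function name is my own, chosen to avoid clashing with CLISP's EXT:mod-expt mentioned below):

(defun mod-expt* (base exp m)
  "Compute (mod (expt BASE EXP) M) without ever building the full power."
  (let ((result 1)
        (b (mod base m)))
    (loop while (> exp 0)
          do (when (oddp exp)
               (setf result (mod (* result b) m)))
             (setf b (mod (* b b) m)
                   exp (ash exp -1)))
    result))

(mod-expt* 2 1000000 59) ; => 53, matching the EXT:mod-expt example below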
According to http://clisp.cons.org/impnotes/num-concepts.html the maximum size for a bignum is (2^2097088 - 1) and your 2^7830457 is much larger than that.
Perhaps you can look at breaking that number down, e.g. by separating out a number of smaller 2^X factors...
Chances are there's a better way to solve the problem. I haven't made it that far on PE, but I know the few that I've done so far tend to have "aha!" solutions to problems that seem out of a computer program's range.
This one especially - 2^7830457 is a huge number -- try (format t "~r" (expt 2 160)). You might try to look at the problem in a new light and see if there's a way to look at it that you haven't thought of.
Lisp is a family of languages with dozens of dialects and hundreds of different implementations.
Computers have finite memory. Programs under some operating systems may have limits on memory size. Different Common Lisp implementations use different numeric libraries.
You may want to consult your CLISP manual for its limitations of its various data types.
CLISP provides the function mod-expt (or EXT:mod-expt):
[1]> (mod-expt 2 1000000 59)
53
which is pretty fast. And for your purpose that works.
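Applied to the computation from the question (I'm assuming the intended formula is 28433 * 2^7830457 + 1, so the 1 is added rather than multiplied as in the original code):

(mod (+ (* 28433 (ext:mod-expt 2 7830457 (expt 10 10))) 1)
     (expt 10 10))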
I was reading about the subset-sum problem when I came up with what appears to be a general-purpose algorithm for solving it:
(defun subset-contains-sum (set sum)
  (let ((subsets) (new-subset) (new-sum))
    (dolist (element set)
      (dolist (subset-sum subsets)
        (setf new-subset (cons element (car subset-sum)))
        (setf new-sum (+ element (cdr subset-sum)))
        (if (= new-sum sum)
            (return-from subset-contains-sum new-subset))
        (setf subsets (cons (cons new-subset new-sum) subsets)))
      (setf subsets (cons (cons element element) subsets)))))
"set" is a list not containing duplicates and "sum" is the sum to search subsets for. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by just cons'ing the element to the front.
I am not sure what its runtime complexity is, but it appears that with each element "sum" grows by, the size of "subsets" doubles, plus one, so it appears to me to be at least quadratic.
I am posting this because my impression before was that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that will, assuming you have the CPU cycles, always give you the correct answer. How many other NP-complete problems can be solved like this one?
NP-complete problems are solvable, just not in polynomial time (as far as we know). That is, an NP-complete problem may have an O(n*2^n) algorithm that could solve it, but it won't have, for example, an O(n^3) algorithm to solve it.
Interestingly, if a quick (polynomial) algorithm was found for any NP-complete problem, then every problem in NP could be solved in polynomial time. This is what P=NP is about.
If I understand your algorithm correctly (and this is based more on your comments than on the code), then it is equivalent to the brute-force O(n*2^n) algorithm: there are 2^n subsets, and since you also need to sum each subset, the algorithm is O(n*2^n).
One more thing about complexity: the O(whatever) only indicates how well a particular algorithm scales. You cannot compare two algorithms and say that one is faster than the other based on this. Big-O notation doesn't care about implementation details and optimisations; it is possible to write two implementations of the same algorithm with one being much faster than the other, even though they might both be O(n^2). One woman making babies is an O(n) operation, but chances are it will take a lot longer than most O(n*log(n)) sorts you perform. All you can say based on this is that sorting will be slower for very large values of n.
All of the NP-complete problems have solutions, as long as you're willing to spend the time to compute the answer. Just because there's no efficient algorithm doesn't mean there's no algorithm at all. For example, you could just iterate over every potential solution, and you'll eventually get one. These problems are used all over the place in real-world computing. You just need to be careful about how big a problem you set for yourself if you're going to need exponential time (or worse!) to solve it.
I am not sure what its runtime complexity is, but it appears that with each element "sum" grows by, the size of "subsets" doubles, plus one, so it appears to me to be at least quadratic.
If the run-time doubles for each increase in N, you're looking at an O(2^N) algorithm. That's also what I'd expect from visiting all subsets of a set (or all members of the powerset of a set), as that's exactly 2^N members (if you include the empty set).
The fact that adding or not adding an element to all hitherto-seen sets is fast doesn't mean that the total processing is fast.
What is going on here could be expressed much more simply using recursion:
(defun subset-sum (set sum &optional subset)
  (when set
    (destructuring-bind (head . tail) set
      (or (and (= head sum) (cons head subset))             ; head completes the sum
          (subset-sum tail sum subset)                      ; leave head out
          (subset-sum tail (- sum head) (cons head subset)))))) ; take head
The two recursive calls at the end clearly show we are traversing a binary tree of depth n, the size of the given set. The number of nodes in the binary tree is O(2^n), as expected.
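A sample call, for concreteness (my own example):

(subset-sum '(1 5 9 13) 14) ; => (9 5), since 5 + 9 = 14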
It's Karp-reducible to a decision problem solvable in pseudo-polynomial time O(nM), where n is the number of elements and M is the target sum. Using a heap or binary search, the upper bound is log(M*2^M) = log M + log(2^M) = log M + M*log 2. Ergo, time: O(nM).
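For reference, a minimal Common Lisp sketch of that O(nM) dynamic program for the decision problem (names are mine; positive integer elements assumed):

(defun subset-sum-p (set sum)
  "Return T if some subset of SET (positive integers) sums to SUM."
  (let ((reachable (make-array (1+ sum) :element-type 'bit
                               :initial-element 0)))
    (setf (bit reachable 0) 1)          ; the empty subset sums to 0
    (dolist (element set)
      ;; Scan downward so each element is used at most once.
      (loop for s from sum downto element
            do (when (= 1 (bit reachable (- s element)))
                 (setf (bit reachable s) 1))))
    (= 1 (bit reachable sum))))

(subset-sum-p '(3 34 4 12 5 2) 9) ; => T, since 4 + 5 = 9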