I have to convert character encoding when writing to a file, but when I use the function "reencode-output-port", the last part of the string is not written to the file.
For example, I run this piece of code:
(call-with-output-file "test.txt"
  (lambda (out)
    (display "abc" (reencode-output-port out "gbk")))
  #:exists 'truncate)
The file test.txt does not contain anything.
When I write a long enough string to the file, the last part is always lost.
Is there a problem with my code?
I realized that I didn't close the new output port. The corrected code is:
(call-with-output-file "test.txt"
  (lambda (out)
    (let ([re-out (reencode-output-port out "gbk")])
      (display "abc" re-out)
      (close-output-port re-out)))
  #:exists 'truncate)
That is, bind the new output port to re-out and close it when done.
But I am wondering why call-with-output-file does not take an encoding parameter. I used call-with-output-file to avoid an explicit close, and now I have to deal with another one.
Does this work?
(call-with-output-file "test.txt"
  (lambda (out)
    (current-output-port (reencode-output-port (current-output-port) "gbk"))
    (display "abc"))
  #:exists 'truncate)
Also, what happens if you use an encoding other than gbk?
I've got some json that I'd like to process in emacs. I've found and used the elisp json library to extract the desired content from the json, and I'd like to replace the json with the elisp equivalent that I've extracted.
This is what I've written:
(defun extract-foo (start end)
  "Extract the foo field from a json object in the region."
  (interactive "r")
  (let ((my_json (cdr (assoc 'FOO (json-read-from-string
                                   (buffer-substring-no-properties start end))))))
    (delete-region start end)
    (SOMETHING)))
I'm stuck at the something. I can't seem to find a way to write the contents of my_json to the buffer at the mark. The only way I can think of is to save the text instead to a temporary buffer, and then (insert-buffer) it. This seems excessive to me though.
How can I do this idiomatically in elisp?
To write it back as JSON:
(insert (format "%s" (json-encode my-json)))
To write it back as elisp:
(insert (format "%s" my-json))
Note that your use of an underscore in the variable name is contrary to Lisp convention; the usual style is my-json rather than my_json.
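Putting the pieces together, a minimal sketch of the whole command might look like this (FOO and the region handling come from the question; swap in json-encode if you want JSON back rather than the raw elisp value):

(require 'json)

(defun extract-foo (start end)
  "Replace the JSON object in the region with its FOO field."
  (interactive "r")
  (let ((my-json (cdr (assoc 'FOO (json-read-from-string
                                   (buffer-substring-no-properties start end))))))
    (delete-region start end)
    ;; After delete-region, point is left at START, so the inserted
    ;; text lands exactly where the JSON used to be.
    (insert (format "%s" my-json))))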
I'm porting some of my Python scripts to Common Lisp. I need to get a list of files in a directory and print each file's contents line by line.
This code shows me all the filenames, but the lines are printed only for the last file. Why?
Also, what is the best way to iterate over the lines of a file?
Thanks.
(dolist (file (directory (make-pathname :name :wild
                                        :type :wild
                                        :defaults "path\\to\\files\\")))
  (print file)
  (with-open-file (stream file)
    (do ((line (read-line stream) (read-line stream)))
        (nil t)
      (print line))))
I would propose writing a function which prints a file, given a pathname and a stream.
You iterate with DO. That's okay, but I would use something like LOOP, which allows slightly easier-to-read code.
Your DO is an endless loop. You might want to end the loop when EOF is reached.
READ-LINE signals an error when it reads past the end of the file. Thus your code signals an error at the end of the first file, and that error causes your code to print only the first file.
You need to call READ-LINE in such a way that you can test for EOF and end the iteration there. See the arguments to READ-LINE. Alternatively you can catch the EOF error, but the other solution is slightly easier.
This seems to work for me:
(dolist (file (directory (make-pathname :name :wild
                                        :defaults "/tmp/lt/files/")))
  (print file)
  (with-open-file (stream file)
    (do ((line (read-line stream nil) (read-line stream nil)))
        ((null line))
      (print line))))
I need to process files of certain types with an external command-line program that accepts a single argument (the filename), and then use the file as modified by that program: either open the modified file, or take the output of the command-line program as the data source for the file.
Is there any way to do this?
Where I used to work, there were some binary files that I wanted to view in Emacs. The way I did this was to add an entry to jka-compr-compression-info-list, like the following for editing AppleScripts:
(add-to-list 'jka-compr-compression-info-list
             ["\\.scpt\\'"
              "Compiling" "osacompile-helper.sh" nil
              "Decompiling" "osacompile-helper.sh" ("-d")
              nil nil "Fasd"])
(jka-compr-update)
Here osacompile-helper.sh is just a little shell wrapper around osacompile and osadecompile that reads from stdin and writes to stdout (which is required). You also need to turn on auto-compression-mode, although I think that's the default. If you use the customize interface to change jka-compr-compression-info-list, instead of setting it directly, then you don't have to call jka-compr-update.
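For completeness, turning the mode on from an init file is a one-liner (plain built-in Emacs, nothing specific to the helper script):

(auto-compression-mode 1)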
If you just want this to work when you open the file with C-x C-f, then you can probably just attach your behaviour to find-file, but deeper down I believe insert-file-contents is what eventually reads files in.
A cursory look doesn't seem to show any appropriate hook, so you could look at doing this with before advice.
(defadvice insert-file-contents
    (before my-before-insert-file-contents-advice)
  "Process files externally before reading them."
  (let ((filename (expand-file-name (ad-get-arg 0))))
    (message "About to read file %s" filename)
    ;; your code here.
    ;; ;; stupid unsafe example:
    ;; (let ((file (shell-quote-argument filename))
    ;;       (tempfile (shell-quote-argument (make-temp-file "some-prefix-"))))
    ;;   (shell-command (format "sort %s >%s" file tempfile))
    ;;   (shell-command (format "mv %s %s" tempfile file)))
    ))

(ad-activate 'insert-file-contents)
You might like to elaborate on your requirements, in case you don't actually need to clobber the original file? (which I think is a horrendous idea, frankly; I certainly wouldn't use code like this!)
For example, you could read in the original file, process it within the buffer (maybe using shell-command-on-region with the replace flag), and set the buffer as unmodified. That way you are only likely to save the changes made by the shell command if you make other edits to the file, and the mere act of loading the file into an editor hasn't actually modified it.
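A rough sketch of that in-buffer approach, using find-file-hook and a hypothetical program my-external-command (both are assumptions to adapt to your own setup):

(defun my-process-after-find ()
  "Filter the freshly visited file through an external command, in the buffer only."
  ;; OUTPUT-BUFFER = t and REPLACE = t: insert the command's output in the
  ;; current buffer, replacing the region (here, the whole buffer).
  (shell-command-on-region (point-min) (point-max) "my-external-command" t t)
  ;; Mark the buffer unmodified so the file on disk stays untouched unless
  ;; you make and save further edits yourself.
  (set-buffer-modified-p nil))

(add-hook 'find-file-hook 'my-process-after-find)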
In any case, I trust you'll implement sensible backup processes into your code, and will be plenty paranoid when testing!
You can call the external program with shell-command, with the output directed to a new buffer. A minimal working example is:
(defun my-find-and-process-file ()
  (interactive)
  (let* ((file (read-file-name "File name: "))
         (buf (pop-to-buffer file)))
    (shell-command (format "cat %s" file) buf)))
Replace cat with the name of your program. This will create a buffer and fill it with the output of your program. If a buffer with the name of your file already exists, it will overwrite it. If that's a possibility, you will want to change the buffer name to something safe by adding a suffix or something. This code also doesn't trigger any of the find-file hooks, so you'll have to select the mode manually, or modify the code to do that for you.
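A variant of that minimal example which avoids clobbering an existing buffer by generating a fresh buffer name (the "<processed>" suffix is arbitrary):

(defun my-find-and-process-file ()
  (interactive)
  (let* ((file (read-file-name "File name: "))
         ;; A fresh, unique buffer name, so any buffer already visiting
         ;; the file is left alone.
         (buf (pop-to-buffer (generate-new-buffer-name
                              (concat (file-name-nondirectory file) "<processed>")))))
    ;; shell-quote-argument is an extra safety touch not in the original.
    (shell-command (format "cat %s" (shell-quote-argument file)) buf)))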
I have a small elisp script which applies Perl::Tidy to a region or the whole file. For reference, here's the script (borrowed from the EmacsWiki):
(defun perltidy-command (start end)
  "The perltidy command we pass markers to."
  (shell-command-on-region start end "perltidy" t t
                           (get-buffer-create "*Perltidy Output*")))
(defun perltidy-dwim (arg)
  "Perltidy the region or the entire buffer."
  (interactive "P")
  (let ((point (point)) (start) (end))
    (if (and mark-active transient-mark-mode)
        (setq start (region-beginning)
              end (region-end))
      (setq start (point-min)
            end (point-max)))
    (perltidy-command start end)
    (goto-char point)))

(global-set-key "\C-ct" 'perltidy-dwim)
I'm using current Emacs 23.1 for Windows (EmacsW32). The problem I'm having is that if I apply that script on a UTF-8 coded file ("U(Unix)" in the status bar) the output comes back Latin-1 coded, i.e. two or more characters for each non-ASCII source character.
Is there any way I can fix that?
EDIT: The problem seems to be solved by using (set-terminal-coding-system 'utf-8-unix) in my init.el. If anyone has other solutions, go ahead and write them!
The following is from the shell-command-on-region documentation:
To specify a coding system for converting non-ASCII characters
in the input and output to the shell command, use C-x RET c
before this command. By default, the input (from the current buffer)
is encoded using coding-system specified by `process-coding-system-alist',
falling back to `default-process-coding-system' if no match for COMMAND
is found in `process-coding-system-alist'.
During execution, it first looks for a coding system in process-coding-system-alist; if that yields nothing, it falls back to default-process-coding-system.
If you want to change the encoding, you can add your conversion entry to process-coding-system-alist; below is its content:
Value: (("\\.dz\\'" no-conversion . no-conversion)
...
("\\.elc\\'" . utf-8-emacs)
("\\.utf\\(-8\\)?\\'" . utf-8)
("\\.xml\\'" . xml-find-file-coding-system)
...
("" undecided))
Or, if you didn't set process-coding-system-alist (in which case it is nil), you can assign your encoding option to default-process-coding-system,
for example:
(setq default-process-coding-system '(utf-8 . utf-8))
(The process output is decoded as utf-8, and text sent to the process is encoded as utf-8.)
Or
(setq default-process-coding-system '(undecided-unix . iso-latin-1-unix))
I also wrote a post about this if you want details.
Quoting the documentation for shell-command-on-region (C-h f shell-command-on-region RET):
To specify a coding system for converting non-ASCII characters
in the input and output to the shell command, use C-x RET c
before this command. By default, the input (from the current buffer)
is encoded in the same coding system that will be used to save the file,
`buffer-file-coding-system'. If the output is going to replace the region,
then it is decoded from that same coding system.
The noninteractive arguments are START, END, COMMAND,
OUTPUT-BUFFER, REPLACE, ERROR-BUFFER, and DISPLAY-ERROR-BUFFER.
Noninteractive callers can specify coding systems by binding
`coding-system-for-read' and `coding-system-for-write'.
In other words, you'd do something like
(let ((coding-system-for-read 'utf-8-unix))
  (shell-command-on-region ...))
This is untested, not sure what the value of coding-system-for-read (or perhaps -write instead? or as well?) should be in your case. I guess you could also utilize the OUTPUT-BUFFER argument and direct the output to a buffer whose coding system is set to what you need it to be.
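Applied to the perltidy-command from the question, an untested sketch (utf-8-unix here is a guess at the right coding systems for your files) might look like:

(defun perltidy-command (start end)
  "Run perltidy on the region, forcing UTF-8 for the process input and output."
  (let ((coding-system-for-read 'utf-8-unix)
        (coding-system-for-write 'utf-8-unix))
    (shell-command-on-region start end "perltidy" t t
                             (get-buffer-create "*Perltidy Output*"))))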
Another option might be to wiggle the locale in the perltidy invocation, but again, without more information about what you are using now, and no means to experiment on a system similar to yours, I can only hint.
I would like emacs to mark files that are generated as read-only when they're opened. The part of the puzzle that I'm missing is how to check if a file "exists". I currently have the following:
;;
;; get file extension
;;
(defun get-ext (file-name)
  (car (cdr (split-string file-name "\\."))))

;;
;; get the base name of the file
;;
(defun base-name (file-name)
  (car (split-string file-name "\\.")))

;;
;; if an 'lzz' file exists for this header, mark it as read only
;;
(defun mark-read-only ()
  (if (string= (get-ext (cur-file)) "h")
      (if (???file-exists??? (concat (base-name (cur-file)) ".lzz"))
          (toggle-read-only))))
What can I use for "???file-exists???"?
Once I find this, I'll add "mark-read-only" to the appropriate hook (which I think is the find-file-hook).
BACKGROUND
We use lzz as a code generator to simplify our C/C++ development process. Briefly, lzz takes a single input file (which looks very like C/C++) and generates header and source files as appropriate.
By default, lzz includes #line directives so that the debugger points to the original source and not the generated source, however, to reduce compilation dependencies we normally disable these directives in header files. The result is that when debugging templates or inline functions, the debugger normally points to the generated header file and not the original source file.
This is not a big deal; however, recently I've found that when debugging I'll make a quick modification to the displayed file and then rebuild. Of course this normally means that the change I made disappears, because the file I edited is generated and the changes are "blown away" during the library rebuild.
SOLUTION
Thanks to everyone for their help and comments. A special thanks to cobbal for pointing out the correct function to use.
Here's the resulting code (with updates based on the other comments here too):
(defun cur-file ()
  "Return the filename (without directory) of the current buffer."
  (file-name-nondirectory (buffer-file-name (current-buffer))))

(defun mark-generated-as-read-only ()
  "Mark generated source files as read only.
Mark generated files (lzz or gz) read only to avoid accidental updates."
  (if (or (string= (file-name-extension (cur-file)) "h")
          (string= (file-name-extension (cur-file)) "cpp"))
      (cond
       ((file-exists-p (concat (file-name-sans-extension (cur-file)) ".lzz"))
        (toggle-read-only))
       ((file-exists-p (concat (file-name-sans-extension (cur-file)) ".gz"))
        (toggle-read-only)))))
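The function still needs to be attached to the hook mentioned in the question; assuming find-file-hook is indeed the right place, that is a one-liner:

(add-hook 'find-file-hook 'mark-generated-as-read-only)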
Try file-exists-p:
"Return t if file FILENAME exists (whether or not you can read it.)"
Note that it's not specific to files; it works for directories too.
Depending on what you need, you might want file-readable-p instead of file-exists-p.
Apropos will only get you so far. Icicles provides apropos completion and progressive completion which let you find help easily for command, function, variable, etc. names that match subparts in an arbitrary order (is it file-exists-p or exists-file-p?).
Use f.el, a modern library for file and directory manipulation. You can use f-exists?, f-file?, f-directory?, and many other predicates. The library is nicer than the standard functions because every file-related function you'll ever need lives under one namespace.
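For example, the check from the question could be sketched with f.el roughly like this (assuming f.el is installed; toggle-read-only is kept from the code above):

(require 'f)

(defun mark-read-only ()
  "Make a header buffer read-only when a matching .lzz source file exists."
  (when (and (buffer-file-name)
             (string= (file-name-extension (buffer-file-name)) "h")
             (f-exists? (concat (file-name-sans-extension (buffer-file-name)) ".lzz")))
    (toggle-read-only)))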