Emacs opening and saving encoding

I have a Perl source file in utf-8 encoding, LF ending. It contains English and Chinese characters. The questions are:
1. When I open the file, the encoding is windows-1251-unix. I have to run these commands:
Alt-x revert-buffer-with-coding-system
> Coding system for visited file (default nil):
utf-8-auto-unix
> Revert buffer from file file_name.pl?
y
How do I automatically open it in utf-8-auto-unix?
2. When I edit the file and try to save it, Emacs asks:
> Select coding system (default raw-text):
utf-8-auto-unix
How do I automatically save the file in utf-8-auto-unix and get rid of the question?

You could add this comment to the top of the file:
# -*- coding: utf-8 -*-

Use describe-variable (C-h v) to examine the variable current-language-environment; follow the customize link and set it to "UTF-8".
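If you want UTF-8 as the default rather than a per-file cookie, a minimal init-file sketch (standard Emacs settings; adjust to taste) looks like this:
;; in ~/.emacs or ~/.emacs.d/init.el:
;; prefer UTF-8 with LF line endings for reading and writing files,
;; so Emacs stops guessing windows-1251 and stops asking on save
(set-language-environment "UTF-8")
(prefer-coding-system 'utf-8-unix)
(setq-default buffer-file-coding-system 'utf-8-unix)
With these in place, UTF-8 files should open and save without the coding-system prompt.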

Related

Getting following error on generating language scorer on Deepspeech

File "generate_scorer_package", line 1
SyntaxError: Non-UTF-8 code starting with '\xea' in file generate_scorer_package on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
Before answering this question, I am going to make some assumptions:
Firstly, I believe you are following the DeepSpeech Playbook and are at the step of generating a kenlm.scorer file, as documented here.
Secondly, I am going to assume that you are using a Python editor of some description, like PyCharm.
The error SyntaxError: Non-UTF-8 code starting with '\xea' in file generate_scorer_package on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details is not related to DeepSpeech; it is related to the Python encoding of the file that is being executed.
Python 3 assumes that the encoding of the .py file is UTF-8; however, some editors (particularly editors in other locales) can override this setting.
To force the file to UTF-8 encoding, add the following code to the top of the generate_scorer_package.py file:
# coding: utf8
NOTE: It MUST be on the first or second line of the file (the second only if the first line is a shebang); see PEP 263.
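For instance, the first lines of the script would then look roughly like this (the shebang line is hypothetical and optional; if it is present, the coding comment goes on line 2):
#!/usr/bin/env python3
# coding: utf8
# The coding declaration above must sit on line 1 or 2 per PEP 263;
# the rest of generate_scorer_package.py stays unchanged.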
Alternatively, identify where in your editor the encoding is set, and change it.
See also these Stack Overflow questions that are similar:
SyntaxError: Non-UTF-8 code starting with '\x92' in file D:\AIAssistant\build\gui.py on line 92, but no encoding declared;
SyntaxError: Non-UTF-8 code starting with '\x82'

how to find the encoding type of configuration.properties files in powershell [duplicate]

This isn't really a programming question: is there a command-line or Windows tool (Windows 7) to get the current encoding of a text file? Sure, I could write a little C# app, but I wanted to know if there is something already built in.
Open up your file using regular old vanilla Notepad that comes with Windows.
It will show you the encoding of the file when you click "Save As...".
Whatever the default-selected encoding is, that is what your current encoding is for the file.
If it is UTF-8, you can change it to ANSI and click Save to change the encoding (or vice versa).
I realize there are many different types of encoding, but this was all I needed when I was informed our export files were in UTF-8 and they required ANSI. It was a one-time export, so Notepad fit the bill for me.
FYI: From my understanding I think "Unicode" (as listed in Notepad) is a misnomer for UTF-16.
More here on Notepad's "Unicode" option: Windows 7 - UTF-8 and Unicdoe
If you have "git" or "Cygwin" on your Windows Machine, then go to the folder where your file is present and execute the command:
file *
This will give you the encoding details of all the files in that folder.
The (Linux) command-line tool 'file' is available on Windows via GnuWin32:
http://gnuwin32.sourceforge.net/packages/file.htm
If you have git installed, it's located in C:\Program Files\git\usr\bin.
Example:
C:\Users\SH\Downloads\SquareRoot>file *
_UpgradeReport_Files; directory
Debug; directory
duration.h; ASCII C++ program text, with CRLF line terminators
ipch; directory
main.cpp; ASCII C program text, with CRLF line terminators
Precision.txt; ASCII text, with CRLF line terminators
Release; directory
Speed.txt; ASCII text, with CRLF line terminators
SquareRoot.sdf; data
SquareRoot.sln; UTF-8 Unicode (with BOM) text, with CRLF line terminators
SquareRoot.sln.docstates.suo; PCX ver. 2.5 image data
SquareRoot.suo; CDF V2 Document, corrupt: Cannot read summary info
SquareRoot.vcproj; XML document text
SquareRoot.vcxproj; XML document text
SquareRoot.vcxproj.filters; XML document text
SquareRoot.vcxproj.user; XML document text
squarerootmethods.h; ASCII C program text, with CRLF line terminators
UpgradeLog.XML; XML document text
C:\Users\SH\Downloads\SquareRoot>file --mime-encoding *
_UpgradeReport_Files; binary
Debug; binary
duration.h; us-ascii
ipch; binary
main.cpp; us-ascii
Precision.txt; us-ascii
Release; binary
Speed.txt; us-ascii
SquareRoot.sdf; binary
SquareRoot.sln; utf-8
SquareRoot.sln.docstates.suo; binary
SquareRoot.suo; CDF V2 Document, corrupt: Cannot read summary infobinary
SquareRoot.vcproj; us-ascii
SquareRoot.vcxproj; utf-8
SquareRoot.vcxproj.filters; utf-8
SquareRoot.vcxproj.user; utf-8
squarerootmethods.h; us-ascii
UpgradeLog.XML; us-ascii
Another tool that I found useful: https://archive.codeplex.com/?p=encodingchecker
EXE can be found here
Install git (on Windows you have to use the git bash console). Type:
file --mime-encoding *
for all files in the current directory, or
file --mime-encoding */*
for the files in all subdirectories.
Here's my take on how to detect the Unicode family of text encodings via BOM. The accuracy of this method is low, as it only works on text files (specifically Unicode files) and defaults to ascii when no BOM is present (like most text editors; the default would be UTF8 if you want to match the HTTP/web ecosystem).
Update 2018: I no longer recommend this method. I recommend using file.exe from GIT or *nix tools, as recommended by @Sybren, and I show how to do that via PowerShell in a later answer.
# from https://gist.github.com/zommarin/1480974
function Get-FileEncoding($Path) {
    # read the first four bytes of the file
    $bytes = [byte[]](Get-Content $Path -Encoding byte -ReadCount 4 -TotalCount 4)
    if(!$bytes) { return 'utf8' }

    # match the leading bytes against the known Unicode BOM signatures
    switch -regex ('{0:x2}{1:x2}{2:x2}{3:x2}' -f $bytes[0],$bytes[1],$bytes[2],$bytes[3]) {
        '^efbbbf'   { return 'utf8' }
        '^2b2f76'   { return 'utf7' }
        '^fffe'     { return 'unicode' }
        '^feff'     { return 'bigendianunicode' }
        '^0000feff' { return 'utf32' }
        default     { return 'ascii' }
    }
}

dir ~\Documents\WindowsPowershell -File |
    select Name,@{Name='Encoding';Expression={Get-FileEncoding $_.FullName}} |
    ft -AutoSize
Recommendation: This can work reasonably well if the dir, ls, or Get-ChildItem only checks known text files, and when you're only looking for "bad encodings" from a known list of tools. (e.g. SQL Management Studio defaults to UTF16, which broke GIT auto-cr-lf for Windows, which was the default for many years.)
A simple solution might be opening the file in Firefox.
Drag and drop the file into Firefox.
Press Ctrl+I to open the page info.
The text encoding will appear in the "Page Info" window.
Note: If the file is not in txt format, just rename it to txt and try again.
P.S. For more info see this article.
I wrote the #4 answer (at time of writing). But lately I have git installed on all my computers, so now I use @Sybren's solution. Here is a new answer that makes that solution handy from PowerShell (without putting all of git/usr/bin in the PATH, which is too much clutter for me).
Add this to your profile.ps1:
$global:gitbin = 'C:\Program Files\Git\usr\bin'
Set-Alias file.exe $gitbin\file.exe
And use it like: file.exe --mime-encoding *. You must include .exe in the command for the PS alias to work.
But if you don't customize your PowerShell profile.ps1, I suggest you start with mine: https://gist.github.com/yzorg/8215221/8e38fd722a3dfc526bbe4668d1f3b08eb7c08be0
and save it to ~\Documents\WindowsPowerShell. It's safe to use on a computer without git, but will write warnings when git is not found.
The .exe in the command is also how I use C:\WINDOWS\system32\where.exe from PowerShell, and many other OS CLI commands that are "hidden by default" by PowerShell. *shrug*
You can simply check that by opening git bash at the file location and then running the command file -i file_name.
Example:
user filesData
$ file -i data.csv
data.csv: text/csv; charset=utf-8
Some C code here for reliable ASCII, BOM, and UTF-8 detection: https://unicodebook.readthedocs.io/guess_encoding.html
Only ASCII, UTF-8 and encodings using a BOM (UTF-7 with BOM, UTF-8 with BOM,
UTF-16, and UTF-32) have reliable algorithms to get the encoding of a document.
For all other encodings, you have to trust heuristics based on statistics.
EDIT:
A PowerShell version of a C# answer from: Effective way to find any file's Encoding. It only works with signatures (BOMs).
# get-encoding.ps1
param([Parameter(ValueFromPipeline=$True)] $filename)

begin {
    # set the .NET current directory to the PowerShell location
    [Environment]::CurrentDirectory = (pwd).path
}

process {
    # StreamReader with $true as the last argument detects the encoding from the BOM;
    # Peek() forces the detection before CurrentEncoding is read
    $reader = [System.IO.StreamReader]::new($filename,
        [System.Text.Encoding]::Default, $true)
    $peek = $reader.Peek()
    $encoding = $reader.CurrentEncoding
    $reader.Close()

    [pscustomobject]@{Name=split-path $filename -leaf
                      BodyName=$encoding.BodyName
                      EncodingName=$encoding.EncodingName}
}
.\get-encoding chinese8.txt
Name BodyName EncodingName
---- -------- ------------
chinese8.txt utf-8 Unicode (UTF-8)
get-childitem -file | .\get-encoding
Looking for a Node.js/npm solution? Try encoding-checker:
npm install -g encoding-checker
Usage
Usage: encoding-checker [-p pattern] [-i encoding] [-v]
Options:
--help Show help [boolean]
--version Show version number [boolean]
--pattern, -p, -d [default: "*"]
--ignore-encoding, -i [default: ""]
--verbose, -v [default: false]
Examples
Get encoding of all files in current directory:
encoding-checker
Return encoding of all md files in current directory:
encoding-checker -p "*.md"
Get encoding of all files in the current directory and its subfolders (this will take quite some time for huge folders and may seem unresponsive):
encoding-checker -p "**"
For more examples refer to the npm docs or the official repository.
Similar to the solution listed above with Notepad, you can also open the file in Visual Studio, if you're using that. In Visual Studio, you can select "File > Advanced Save Options..."
The "Encoding:" combo box will tell you specifically which encoding is currently being used for the file. It has a lot more text encodings listed in there than Notepad does, so it's useful when dealing with various files from around the world and whatever else.
Just like Notepad, you can also change the encoding from the list of options there, and then saving the file after hitting "OK". You can also select the encoding you want through the "Save with Encoding..." option in the Save As dialog (by clicking the arrow next to the Save button).
The only way that I have found to do this is with VIM or Notepad++.
EncodingChecker
File Encoding Checker is a GUI tool that allows you to validate the text encoding of one or more files. The tool can display the encoding for all selected files, or only the files that do not have the encodings you specify.
File Encoding Checker requires .NET 4 or above to run.

Can I set the VS Code default EOL based on file type?

Working with .sh files on Windows causes issues when running them in Linux Docker containers if they have EOL of CRLF. Can I make VS Code always work with LF for specific file type such as .sh while still using CRLF for all others?
I only find "files.eol": "\r\n" in settings, but this is global.
I understand you can change this in the bottom toolbar on a per-file basis, but I would like this to be the default for all .sh files.
In your settings window, go to
Settings > Text Editor > Files > Eol. You'll find the following options there:
- \n
- \r\n
- auto (default)
Here \n represents LF, \r\n represents CRLF, and auto uses the operating-system-specific EOL sequence.
Select your option and save.
VS Code: version 1.13.3
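If you prefer editing settings.json directly, the equivalent of picking \n in that dropdown is the sketch below ("files.eol" is the same setting mentioned in the question; the comment is only illustrative):
{
    // use LF as the end-of-line sequence for new files
    "files.eol": "\n"
}
Note this is still global; the language-specific approaches in the answers below narrow it to .sh files.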
You can use EditorConfig.
Install the editorconfig extension, and then add a .editorconfig file at the root of your project with this:
[*]
end_of_line = crlf
[*.{sh}]
end_of_line = lf
But as @axiac said, I would recommend always using lf...
EDIT: I was a bit "premature" with this answer, but it now works as of v1.40. See https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_40.md#fileseol-per-language
You can do this in VS Code without an extension by making a language-specific setting:
In the command palette, search for "Configure language specific", select it, and choose "shellscript" from the language options.
This will create the following in your settings:
"[shellscript]": {
},
Now add whatever you want to apply to shellscript files only (not all settings will work in there, but most do):
"[shellscript]": {
"files.eol": "\n"
},
VERY, VERY IMPORTANT:
The end-of-line sequence is used for new files. For existing files,
the existing end-of-line sequence is always preserved. To change the
end-of-line sequence for an existing file, use the Change End Of Line
Sequence command.
from https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_40.md#fileseol-per-language

How to open ".z" file in emacs?

I have a plaintext file named ~/.z but when I open it with emacs --no-init-file, I get a blank buffer and a buffer named *jka-compr-error* which states
Error while executing "gzip -c -q -d < /home/username/.z"
gzip: stdin: not in gzip format
How can I open this file in emacs?
Use find-file-literally. It does no conversions and it does not use the auto-mode-alist to set the major mode: it uses fundamental-mode instead.
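For example, you can run it interactively or call it from Lisp (the path is just the one from the question):
M-x find-file-literally RET ~/.z RET
;; or, from *scratch* / your init file:
(find-file-literally "~/.z")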
Turn off auto-compression-mode in your .emacs file: (auto-compression-mode 0), and try again.
.z is a Unix compression file extension, so probably you can't open it directly.
You could always extract it, but I think extraction would not apply here since you said it's a plaintext file.

EmacsW32 renames buffers with old Windows shortened file names

Let's see if I can reach the EmacsW32 users on stackoverflow.
I've just installed the patched version of EmacsW32 from http://ourcomments.org/Emacs/EmacsW32.html
I find it very nice that .txt files are associated with Emacs, so that when you click on one, emacsclient opens it in the running instance of Emacs.
Problem is, for some reason, the buffer is renamed with the old-style shortened file names, so, for example, the buffer with file "activities-2008.txt" is renamed to "ACTIV~1.TXT", which I don't like.
How do I get EmacsW32 not to rename the buffer, and use the whole file name as the buffer name instead ?
Ick, that sucks.
Why not just use the emacsclientw that comes with the standard Windows emacs distribution?
It does have a bit of an issue in that you get an annoying "No error" error box if Emacs isn't already running, but any real emacs user starts emacs first thing when they log on anyway. :-)
Solved.
The problem is not with emacs, but with the way Windows runs a program when a file type is associated in the registry.
In my registry, I had this value for the keys that associate txt files with Emacs:
C:\emacs-23.0.91.1\Emacs\bin\emacsclientw.exe -n "%1"
The problem is the %1, which is replaced by a short file name.
According to this message http://lists.gnu.org/archive/html/help-emacs-windows/2009-05/msg00022.html:
%L is long file names.
%1 is long file names IF
* Explorer can find the exe file (it does not look very hard)
AND
* The file header says it is Win 95 aware Win16 exe, or
* It is a 32 bit program
Else %1 will be a short name.
The solution is to use %L in place of %1 in the reg keys.
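For example, a hypothetical .reg sketch of that change (the exact key depends on where your .txt association actually lives; the standard txtfile ProgID and the emacsclientw path from above are assumed here):
Windows Registry Editor Version 5.00

; assumed location of the .txt "open" command -- verify in regedit first
[HKEY_CLASSES_ROOT\txtfile\shell\open\command]
@="C:\\emacs-23.0.91.1\\Emacs\\bin\\emacsclientw.exe -n \"%L\""
After importing this (or editing the value directly in regedit), buffers opened via the file association should get the full long file name.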