When I run babel --plugins transform-react-jsx like_button.jsx > like_button.js the resulting like_button.js is UTF-16 encoded (and like_button.jsx has some 8-bit encoding, probably UTF-8).
How can I make babel write like_button.js UTF-8 encoded?
Babel's output is definitely UTF-8. Since you are seeing UTF-16 in your file, and the file is being written by your terminal, it seems most likely that your terminal is re-encoding the data before writing it to a file.
The easiest option for you would be to change from
-babel --plugins transform-react-jsx like_button.jsx > like_button.js
+babel --plugins transform-react-jsx like_button.jsx --out-file like_button.js
so that Babel itself is responsible for writing the output to the file, which removes the terminal from the equation.
If you don't want to do that, you'll need to look into your terminal options to see if there is an explicit encoding set somewhere.
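For what it's worth, if the redirection is happening in Windows PowerShell (where > writes UTF-16LE by default), and assuming that's the shell involved here, you can also keep a pipeline and force the encoding yourself:

babel --plugins transform-react-jsx like_button.jsx | Out-File like_button.js -Encoding utf8

Note that Windows PowerShell's utf8 writes a BOM (PowerShell 7+ writes BOM-less UTF-8), and the data still round-trips through the console's encoding, so --out-file remains the safer fix.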
This isn't really a programming question, but is there a command-line or Windows tool (Windows 7) to get the current encoding of a text file? Sure, I could write a little C# app, but I wanted to know if there is something already built in?
Open up your file using regular old vanilla Notepad that comes with Windows.
It will show you the encoding of the file when you click "Save As...".
Whatever the default-selected encoding is, that is what your current encoding is for the file.
If it is UTF-8, you can change it to ANSI and click Save to change the encoding (or vice versa).
I realize there are many different types of encoding, but this was all I needed when I was informed our export files were in UTF-8 and they required ANSI. It was a one-time export, so Notepad fit the bill for me.
FYI: From my understanding, "Unicode" (as listed in Notepad) is a misnomer for UTF-16.
More here on Notepad's "Unicode" option: Windows 7 - UTF-8 and Unicode
If you have "git" or "Cygwin" on your Windows Machine, then go to the folder where your file is present and execute the command:
file *
This will give you the encoding details of all the files in that folder.
The (Linux) command-line tool 'file' is available on Windows via GnuWin32:
http://gnuwin32.sourceforge.net/packages/file.htm
If you have git installed, it's located in C:\Program Files\git\usr\bin.
Example:
C:\Users\SH\Downloads\SquareRoot>file *
_UpgradeReport_Files: directory
Debug: directory
duration.h: ASCII C++ program text, with CRLF line terminators
ipch: directory
main.cpp: ASCII C program text, with CRLF line terminators
Precision.txt: ASCII text, with CRLF line terminators
Release: directory
Speed.txt: ASCII text, with CRLF line terminators
SquareRoot.sdf: data
SquareRoot.sln: UTF-8 Unicode (with BOM) text, with CRLF line terminators
SquareRoot.sln.docstates.suo: PCX ver. 2.5 image data
SquareRoot.suo: CDF V2 Document, corrupt: Cannot read summary info
SquareRoot.vcproj: XML document text
SquareRoot.vcxproj: XML document text
SquareRoot.vcxproj.filters: XML document text
SquareRoot.vcxproj.user: XML document text
squarerootmethods.h: ASCII C program text, with CRLF line terminators
UpgradeLog.XML: XML document text
C:\Users\SH\Downloads\SquareRoot>file --mime-encoding *
_UpgradeReport_Files: binary
Debug: binary
duration.h: us-ascii
ipch: binary
main.cpp: us-ascii
Precision.txt: us-ascii
Release: binary
Speed.txt: us-ascii
SquareRoot.sdf: binary
SquareRoot.sln: utf-8
SquareRoot.sln.docstates.suo: binary
SquareRoot.suo: CDF V2 Document, corrupt: Cannot read summary info; binary
SquareRoot.vcproj: us-ascii
SquareRoot.vcxproj: utf-8
SquareRoot.vcxproj.filters: utf-8
SquareRoot.vcxproj.user: utf-8
squarerootmethods.h: us-ascii
UpgradeLog.XML: us-ascii
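If you'd rather not cd around in a git bash or Cygwin shell, you can also call the bundled binary directly (assuming the default Git install location; myfile.txt is a stand-in path), e.g. from PowerShell:

& 'C:\Program Files\Git\usr\bin\file.exe' --mime-encoding myfile.txt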
Another tool that I found useful: https://archive.codeplex.com/?p=encodingchecker
The EXE can be found here.
Install git (on Windows you have to use the git bash console). Type:
file --mime-encoding *
for all files in the current directory, or
file --mime-encoding */*
for the files in all subdirectories
Here's my take on how to detect the Unicode family of text encodings via BOM. The accuracy of this method is low, as it only works on text files (specifically Unicode files), and it defaults to ascii when no BOM is present (like most text editors; the default would be UTF8 if you want to match the HTTP/web ecosystem).
Update 2018: I no longer recommend this method. I recommend using file.exe from GIT or *nix tools, as recommended by @Sybren, and I show how to do that via PowerShell in a later answer.
# from https://gist.github.com/zommarin/1480974
function Get-FileEncoding($Path) {
    # read the first four bytes of the file
    $bytes = [byte[]](Get-Content $Path -Encoding byte -ReadCount 4 -TotalCount 4)
    if(!$bytes) { return 'utf8' }
    switch -regex ('{0:x2}{1:x2}{2:x2}{3:x2}' -f $bytes[0],$bytes[1],$bytes[2],$bytes[3]) {
        '^efbbbf'   { return 'utf8' }
        '^2b2f76'   { return 'utf7' }
        '^fffe0000' { return 'utf32' }            # UTF-32 LE; must be tested before the shorter UTF-16 LE prefix
        '^fffe'     { return 'unicode' }          # UTF-16 LE
        '^feff'     { return 'bigendianunicode' } # UTF-16 BE
        '^0000feff' { return 'utf32' }            # UTF-32 BE
        default     { return 'ascii' }
    }
}

dir ~\Documents\WindowsPowerShell -File |
    select Name,@{Name='Encoding';Expression={Get-FileEncoding $_.FullName}} |
    ft -AutoSize
Recommendation: This can work reasonably well if the dir, ls, or Get-ChildItem only checks known text files, and when you're only looking for "bad encodings" from a known list of tools. (E.g. SQL Server Management Studio defaults to UTF-16, which broke git's auto-crlf handling on Windows, which was the default for many years.)
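A minimal sketch of that kind of filtered scan, assuming Get-FileEncoding from above is defined; the extension list and the 'ascii' baseline are placeholders for your own conventions:

# check known text files and surface anything that is not plain ASCII
dir -Recurse -Include *.ps1,*.sql,*.txt -File |
    select Name,@{Name='Encoding';Expression={Get-FileEncoding $_.FullName}} |
    where { $_.Encoding -ne 'ascii' }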
A simple solution might be opening the file in Firefox.
Drag and drop the file into Firefox
Press Ctrl+I to open the Page Info window
The text encoding will appear there under "Text Encoding".
Note: If the file is not in txt format, just rename it to txt and try again.
P.S. For more info see this article.
I wrote the #4 answer (at time of writing). But lately I have git installed on all my computers, so now I use @Sybren's solution. Here is a new answer that makes that solution handy from PowerShell (without putting all of git/usr/bin in the PATH, which is too much clutter for me).
Add this to your profile.ps1:
$global:gitbin = 'C:\Program Files\Git\usr\bin'
Set-Alias file.exe $gitbin\file.exe
And use it like: file.exe --mime-encoding *. You must include .exe in the command for the PS alias to work.
But if you don't customize your PowerShell profile.ps1 I suggest you start with mine: https://gist.github.com/yzorg/8215221/8e38fd722a3dfc526bbe4668d1f3b08eb7c08be0
and save it to ~\Documents\WindowsPowerShell. It's safe to use on a computer without git, but will write warnings when git is not found.
The .exe in the command is also how I use C:\WINDOWS\system32\where.exe from PowerShell, and many other OS CLI commands that are "hidden by default" by PowerShell, *shrug*.
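If you'd rather add just the alias than adopt a whole profile, a minimal sketch of that kind of guard (the warning text is my own) could look like:

# profile.ps1: only create the alias when git's file.exe is actually present
$global:gitbin = 'C:\Program Files\Git\usr\bin'
if (Test-Path "$gitbin\file.exe") {
    Set-Alias file.exe "$gitbin\file.exe" -Scope Global
} else {
    Write-Warning "file.exe not found at $gitbin; is git installed?"
}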
You can simply check that by opening git bash at the file's location and then running the command file -i file_name.
Example:
user filesData
$ file -i data.csv
data.csv: text/csv; charset=utf-8
Some C code here for reliable ASCII, BOM, and UTF-8 detection: https://unicodebook.readthedocs.io/guess_encoding.html
Only ASCII, UTF-8 and encodings using a BOM (UTF-7 with BOM, UTF-8 with BOM,
UTF-16, and UTF-32) have reliable algorithms to get the encoding of a document.
For all other encodings, you have to trust heuristics based on statistics.
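To illustrate the "reliable" part for UTF-8 from PowerShell (a minimal sketch; myfile.txt is a stand-in path): a strict decoder either succeeds or throws, so no statistics are needed:

# a UTF8Encoding built with throwOnInvalidBytes = $true rejects malformed input
$strictUtf8 = [System.Text.UTF8Encoding]::new($false, $true)
try {
    [void]$strictUtf8.GetString([System.IO.File]::ReadAllBytes('myfile.txt'))
    'valid UTF-8 (ASCII is a subset, so it passes too)'
} catch [System.Text.DecoderFallbackException] {
    'not UTF-8'
}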
EDIT:
A PowerShell version of a C# answer from: Effective way to find any file's Encoding. It only works with signatures (BOMs).
# get-encoding.ps1
param([Parameter(ValueFromPipeline=$True)] $filename)

begin {
    # set .NET's current directory to match PowerShell's
    [Environment]::CurrentDirectory = (pwd).path
}

process {
    # the $true argument tells StreamReader to detect the encoding from the BOM
    $reader = [System.IO.StreamReader]::new($filename,
        [System.Text.Encoding]::Default, $true)
    $peek = $reader.Peek()  # force a read so detection actually happens
    $encoding = $reader.CurrentEncoding
    $reader.Close()

    [pscustomobject]@{Name = Split-Path $filename -Leaf
                      BodyName = $encoding.BodyName
                      EncodingName = $encoding.EncodingName}
}
.\get-encoding chinese8.txt
Name BodyName EncodingName
---- -------- ------------
chinese8.txt utf-8 Unicode (UTF-8)
get-childitem -file | .\get-encoding
Looking for a Node.js/npm solution? Try encoding-checker:
npm install -g encoding-checker
Usage
Usage: encoding-checker [-p pattern] [-i encoding] [-v]
Options:
--help Show help [boolean]
--version Show version number [boolean]
--pattern, -p, -d [default: "*"]
--ignore-encoding, -i [default: ""]
--verbose, -v [default: false]
Examples
Get encoding of all files in current directory:
encoding-checker
Return encoding of all md files in current directory:
encoding-checker -p "*.md"
Get encoding of all files in current directory and its subfolders (will take quite some time for huge folders; seemingly unresponsive):
encoding-checker -p "**"
For more examples refer to the npm docs or the official repository.
Similar to the solution listed above with Notepad, you can also open the file in Visual Studio, if you're using that. In Visual Studio, you can select "File > Advanced Save Options..."
The "Encoding:" combo box will tell you specifically which encoding is currently being used for the file. It has a lot more text encodings listed in there than Notepad does, so it's useful when dealing with various files from around the world and whatever else.
Just like Notepad, you can also change the encoding from the list of options there, and then save the file after hitting "OK". You can also select the encoding you want through the "Save with Encoding..." option in the Save As dialog (by clicking the arrow next to the Save button).
The only tools that I have found to do this are Vim and Notepad++.
EncodingChecker
File Encoding Checker is a GUI tool that allows you to validate the text encoding of one or more files. The tool can display the encoding for all selected files, or only the files that do not have the encodings you specify.
File Encoding Checker requires .NET 4 or above to run.
I am using PDFtk (version 2.02, UNIX) for merging PDFs and I'm facing the following problems in the output PDF:
The initial view of the PDF is changed (it should open with the Bookmarks Panel and Page)
Bookmarks don't point to the exact linked section as in the separate PDFs (they show the fit-page view of the section)
The original metadata is lost (the output should retain the first PDF's metadata)
Please suggest a workaround for the above points.
Regards,
Umesh
It's a little late to answer, but I came across this question while looking for a solution to the same problem. After taking a look at the man page of pdftk I found a solution and made a little script:
#!/usr/bin/env bash
# pdfcat
array=( $@ )
len=${#array[@]}
merged=${array[$len-1]}
pdf2merge=${array[@]:0:$len-1}
pdftk $1 dump_data output metadata
pdftk $pdf2merge cat output $merged
pdftk $merged update_info metadata output out
mv out $merged
rm metadata
exiftool $merged
The script saves the metadata of the first PDF file (first argument) and writes it to a file called metadata. Then it uses the cat command of pdftk to merge all the files (the output file is the last argument). Finally it loads metadata's content into the metadata of the resulting file before erasing metadata. The last line uses exiftool to print the metadata of the resulting file in order to check that everything went well.
You can save this script to your /home/username/bin directory and make it executable with:
$ chmod u+x scriptname
and then you can use it to merge files with the following syntax:
$ scriptname 1.pdf 2.pdf 3.pdf output.pdf
The resulting output.pdf will have the same metadata as the original 1.pdf file.
I'm getting this error when I run pandoc --filter pandoc-citeproc myfile.markdown myfile.pdf:
pandoc: Cannot decode byte '\xd0': Data.Text.Encoding.Fusion.streamUtf8: Invalid UTF-8 stream
I have searched here and here, but I have double-checked from the text editor and my file is UTF-8 encoded. It has accented Spanish characters, but the same command worked without any problem in the past. Any pointers to a solution would be appreciated.
My bad. The problem is related to the command I used to tell pandoc to create the pdf output. The proper command should be:
pandoc --filter pandoc-citeproc myfile.markdown -o myfile.pdf
note the -o flag between the input markdown file and the output pdf file. Without it, pandoc treats myfile.pdf as a second input file and tries to decode it as UTF-8, which is why I got the same UTF-8 message that the people trying to convert from pdf to other formats documented in my links.
Check JabRef encoding
In my case, I bumped into a similar error when converting Pandoc Markdown to XHTML. The culprit was a set of BibTeX citations which JabRef had encoded by default in ISO8859_1.
This default JabRef behaviour can be changed once and for all by setting Default encoding: to UTF8 in JabRef's Options > Preferences > General menu.
I have a Perl source file in utf-8 encoding, LF ending. It contains English and Chinese characters. The questions are:
1. When I open the file, the encoding is windows-1251-unix. I have to run these commands:
Alt-x revert-buffer-with-coding-system
> Coding system for visited file (default nil):
utf-8-auto-unix
> Revert buffer from file file_name.pl?
y
How can I automatically open it in utf-8-auto-unix?
2. When I edit the file and try to save it, Emacs asks me a question:
> Select coding system (default raw-text):
utf-8-auto-unix
How can I automatically save the file in utf-8-auto-unix, and get rid of the question?
You could add this comment to the top of the file:
# -*- coding: utf-8 -*-
Use describe-variable (C-h v) to examine the variable current-language-environment; follow the customize link and set it to "UTF-8".
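Alternatively, if you want UTF-8 globally rather than per-file, a minimal init-file sketch (this changes Emacs' defaults for all buffers, which may be more than you want):

;; in ~/.emacs or ~/.emacs.d/init.el
(prefer-coding-system 'utf-8-unix)        ; try UTF-8 with LF line endings first
(set-default-coding-systems 'utf-8-unix)  ; default for new files and subprocesses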