VS Code: how to reopen or save a file with ASCII encoding - visual-studio-code

I have a text file that contains an SQL query for a phrase in Hebrew.
I wrote the SQL in VS Code using UTF-8 encoding.
Now I have to run this SQL using a Teradata utility called BTEQ.
This utility uses ASCII encoding.
I tried to reopen or save the file in VS Code with a new encoding but couldn't find an ASCII encoding.
In the UltraEdit editor there is an ASCII encoding, and when I copy-paste the SQL into UltraEdit and save it with ASCII encoding, the SQL runs successfully.
Is there any way to use VS Code to save the file in ASCII encoding?
Update: 2022-05-23
Following the comments:
The reason I need this is that I work with a Teradata database.
The tool with which the SQL is written is called "Teradata Assistant".
Teradata Assistant version: 16.20.0.9, 2019-10-25
Database version: 16.20.53.48
With this tool I can write the SQL with Hebrew letters and save it as a text file.
When I open the file with Teradata Assistant, the file is displayed correctly with the Hebrew letters.
However, when I open the same file with VS Code using UTF-8, all the Hebrew letters are replaced with this symbol: �
Finally, when I open the file with Notepad with ANSI encoding, the file is opened with the Hebrew letters displayed correctly.
I find it odd that VS Code can't do something that Notepad can do.
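Note that plain 7-bit ASCII cannot represent Hebrew at all, so what UltraEdit's "ASCII" and Notepad's "ANSI" are almost certainly writing here is the Windows ANSI code page, which is Windows-1255 on a Hebrew-locale system; VS Code should list this as "Hebrew (Windows 1255)" under "Save with Encoding". Failing that, here is a minimal PowerShell sketch for re-encoding the file outside the editor; the paths are placeholders, the 1255 code page is an assumption based on the Hebrew locale, and it assumes Windows PowerShell, where GetEncoding(1255) needs no extra setup:
# read the UTF-8 file and rewrite it in the assumed Hebrew ANSI code page (Windows-1255)
$text = [System.IO.File]::ReadAllText('C:\scripts\query.sql', [System.Text.Encoding]::UTF8)
$ansi = [System.Text.Encoding]::GetEncoding(1255)   # assumption: Hebrew ANSI code page
[System.IO.File]::WriteAllText('C:\scripts\query-ansi.sql', $text, $ansi)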

Related

how to find the encoding type of configuration.properties files in powershell [duplicate]

This isn't really a programming question: is there a command-line or Windows tool (Windows 7) to get the current encoding of a text file? Sure, I could write a little C# app, but I wanted to know if there is something already built in.
Open up your file using regular old vanilla Notepad that comes with Windows.
It will show you the encoding of the file when you click "Save As...".
Whatever the default-selected encoding is, that is the current encoding of the file.
If it is UTF-8, you can change it to ANSI and click Save to change the encoding (or vice versa).
I realize there are many different types of encoding, but this was all I needed when I was informed our export files were in UTF-8 and they required ANSI. It was a one-time export, so Notepad fit the bill for me.
FYI: From my understanding, "Unicode" (as listed in Notepad) is a misnomer for UTF-16.
More here on Notepad's "Unicode" option: Windows 7 - UTF-8 and Unicode
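To see what that "Unicode" option actually writes, a quick check from PowerShell (a sketch; it assumes PowerShell 5+ for Format-Hex, and the file name is just an example) is to save a file with PowerShell's own Unicode encoding, which is the same UTF-16 LE-with-BOM format:
'abc' | Set-Content test-unicode.txt -Encoding Unicode   # 'Unicode' here means UTF-16 LE with BOM
Format-Hex test-unicode.txt   # the dump starts with FF FE, the UTF-16 LE byte-order mark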
If you have Git or Cygwin on your Windows machine, then go to the folder where your file is present and execute the command:
file *
This will give you the encoding details of all the files in that folder.
The (Linux) command-line tool 'file' is available on Windows via GnuWin32:
http://gnuwin32.sourceforge.net/packages/file.htm
If you have Git installed, it's located in C:\Program Files\Git\usr\bin.
Example:
C:\Users\SH\Downloads\SquareRoot>file *
_UpgradeReport_Files: directory
Debug: directory
duration.h: ASCII C++ program text, with CRLF line terminators
ipch: directory
main.cpp: ASCII C program text, with CRLF line terminators
Precision.txt: ASCII text, with CRLF line terminators
Release: directory
Speed.txt: ASCII text, with CRLF line terminators
SquareRoot.sdf: data
SquareRoot.sln: UTF-8 Unicode (with BOM) text, with CRLF line terminators
SquareRoot.sln.docstates.suo: PCX ver. 2.5 image data
SquareRoot.suo: CDF V2 Document, corrupt: Cannot read summary info
SquareRoot.vcproj: XML document text
SquareRoot.vcxproj: XML document text
SquareRoot.vcxproj.filters: XML document text
SquareRoot.vcxproj.user: XML document text
squarerootmethods.h: ASCII C program text, with CRLF line terminators
UpgradeLog.XML: XML document text
C:\Users\SH\Downloads\SquareRoot>file --mime-encoding *
_UpgradeReport_Files: binary
Debug: binary
duration.h: us-ascii
ipch: binary
main.cpp: us-ascii
Precision.txt: us-ascii
Release: binary
Speed.txt: us-ascii
SquareRoot.sdf: binary
SquareRoot.sln: utf-8
SquareRoot.sln.docstates.suo: binary
SquareRoot.suo: CDF V2 Document, corrupt: Cannot read summary infobinary
SquareRoot.vcproj: us-ascii
SquareRoot.vcxproj: utf-8
SquareRoot.vcxproj.filters: utf-8
SquareRoot.vcxproj.user: utf-8
squarerootmethods.h: us-ascii
UpgradeLog.XML: us-ascii
Another tool that I found useful: https://archive.codeplex.com/?p=encodingchecker
EXE can be found here
Install Git (on Windows you have to use the Git Bash console). Type:
file --mime-encoding *
for all files in the current directory, or
file --mime-encoding */*
for the files in all subdirectories.
Here's my take on how to detect the Unicode family of text encodings via BOM. The accuracy of this method is low, as it only works on text files (specifically Unicode files) and defaults to ascii when no BOM is present (like most text editors; the default would be UTF-8 if you want to match the HTTP/web ecosystem).
Update 2018: I no longer recommend this method. I recommend using file.exe from Git or *nix tools, as recommended by @Sybren, and I show how to do that via PowerShell in a later answer.
# from https://gist.github.com/zommarin/1480974
function Get-FileEncoding($Path) {
    # read the first four bytes of the file (Windows PowerShell syntax)
    $bytes = [byte[]](Get-Content $Path -Encoding byte -ReadCount 4 -TotalCount 4)

    # empty files are treated as utf8
    if(!$bytes) { return 'utf8' }

    # match the leading bytes against known byte-order-mark signatures
    switch -regex ('{0:x2}{1:x2}{2:x2}{3:x2}' -f $bytes[0],$bytes[1],$bytes[2],$bytes[3]) {
        '^efbbbf'   { return 'utf8' }              # UTF-8 BOM
        '^2b2f76'   { return 'utf7' }              # UTF-7 BOM
        '^fffe'     { return 'unicode' }           # UTF-16 LE BOM
        '^feff'     { return 'bigendianunicode' }  # UTF-16 BE BOM
        '^0000feff' { return 'utf32' }             # UTF-32 BE BOM
        default     { return 'ascii' }             # no BOM: assume ascii
    }
}

dir ~\Documents\WindowsPowershell -File |
    select Name,@{Name='Encoding';Expression={Get-FileEncoding $_.FullName}} |
    ft -AutoSize
Recommendation: this can work reasonably well if the dir, ls, or Get-ChildItem only checks known text files, and when you're only looking for "bad encodings" from a known list of tools. (E.g., SQL Server Management Studio defaults to UTF-16, which broke Git's auto-crlf for Windows, which was the default for many years.)
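For example, one way to limit the scan to known text files (a sketch; the extension list is just an assumption, adjust it to your tree):
# scan only known text extensions so binary files aren't misreported as 'ascii'
dir ~\Documents\WindowsPowershell\* -Include *.ps1,*.psm1,*.txt -File |
    select Name,@{Name='Encoding';Expression={Get-FileEncoding $_.FullName}} |
    ft -AutoSize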
A simple solution might be opening the file in Firefox:
Drag and drop the file into Firefox.
Press Ctrl+I to open the page info.
The text encoding will appear in the "Page Info" window.
Note: if the file is not in txt format, just rename it to txt and try again.
P.S. For more info see this article.
I wrote the #4 answer (at the time of writing). But lately I have Git installed on all my computers, so now I use @Sybren's solution. Here is a new answer that makes that solution handy from PowerShell (without putting all of git/usr/bin in the PATH, which is too much clutter for me).
Add this to your profile.ps1:
$global:gitbin = 'C:\Program Files\Git\usr\bin'
Set-Alias file.exe $gitbin\file.exe
And use it like: file.exe --mime-encoding *. You must include .exe in the command for the PS alias to work.
But if you don't customize your PowerShell profile.ps1, I suggest you start with mine: https://gist.github.com/yzorg/8215221/8e38fd722a3dfc526bbe4668d1f3b08eb7c08be0
and save it to ~\Documents\WindowsPowerShell. It's safe to use on a computer without Git, but it will write warnings when Git is not found.
The .exe in the command is also how I use C:\WINDOWS\system32\where.exe from PowerShell, and many other OS CLI commands that are "hidden by default" by PowerShell, *shrug*.
You can simply check that by opening Git Bash at the file location and then running the command file -i file_name.
Example:
$ file -i data.csv
data.csv: text/csv; charset=utf-8
Some C code here for reliable ASCII, BOM, and UTF-8 detection: https://unicodebook.readthedocs.io/guess_encoding.html
Only ASCII, UTF-8 and encodings using a BOM (UTF-7 with BOM, UTF-8 with BOM,
UTF-16, and UTF-32) have reliable algorithms to get the encoding of a document.
For all other encodings, you have to trust heuristics based on statistics.
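As an illustration of why UTF-8 is one of the "reliable" cases: a strict decoder either succeeds or throws on the first invalid byte sequence, so you can test the raw bytes directly. A minimal PowerShell sketch (the file name is hypothetical; assumes PowerShell 5+ for the ::new syntax):
# strict UTF-8 decoder: the second constructor argument (throwOnInvalidBytes)
# makes GetString() throw on anything that is not valid UTF-8
$bytes = [System.IO.File]::ReadAllBytes('C:\temp\sample.txt')
$strictUtf8 = [System.Text.UTF8Encoding]::new($false, $true)
try   { $null = $strictUtf8.GetString($bytes); 'decodes as valid UTF-8' }
catch { 'not valid UTF-8' }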
EDIT:
A PowerShell version of a C# answer from Effective way to find any file's Encoding. It only works with signatures (BOMs).
# get-encoding.ps1
param([Parameter(ValueFromPipeline=$True)] $filename)

begin {
    # set the .NET current directory to match PowerShell's,
    # so relative paths resolve correctly
    [Environment]::CurrentDirectory = (pwd).path
}

process {
    # third argument $true = detect encoding from byte-order marks
    $reader = [System.IO.StreamReader]::new($filename,
        [System.Text.Encoding]::Default, $true)
    $peek = $reader.Peek()   # a read is required before CurrentEncoding is updated
    $encoding = $reader.CurrentEncoding
    $reader.Close()

    [pscustomobject]@{Name=split-path $filename -leaf
                      BodyName=$encoding.BodyName
                      EncodingName=$encoding.EncodingName}
}
.\get-encoding chinese8.txt
Name         BodyName EncodingName
----         -------- ------------
chinese8.txt utf-8    Unicode (UTF-8)
get-childitem -file | .\get-encoding
Looking for a Node.js/npm solution? Try encoding-checker:
npm install -g encoding-checker
Usage
Usage: encoding-checker [-p pattern] [-i encoding] [-v]
Options:
--help Show help [boolean]
--version Show version number [boolean]
--pattern, -p, -d [default: "*"]
--ignore-encoding, -i [default: ""]
--verbose, -v [default: false]
Examples
Get encoding of all files in current directory:
encoding-checker
Return encoding of all md files in current directory:
encoding-checker -p "*.md"
Get encoding of all files in current directory and its subfolders (will take quite some time for huge folders; seemingly unresponsive):
encoding-checker -p "**"
For more examples refer to the npm docs or the official repository.
Similar to the Notepad solution listed above, you can also open the file in Visual Studio, if you're using that. In Visual Studio, select "File > Advanced Save Options...".
The "Encoding:" combo box will tell you specifically which encoding is currently being used for the file. It has a lot more text encodings listed there than Notepad does, so it's useful when dealing with various files from around the world and whatever else.
Just like Notepad, you can also change the encoding from the list of options there and then save the file after hitting "OK". You can also select the encoding you want through the "Save with Encoding..." option in the Save As dialog (by clicking the arrow next to the Save button).
The only way that I have found to do this is Vim or Notepad++.
EncodingChecker
File Encoding Checker is a GUI tool that allows you to validate the text encoding of one or more files. The tool can display the encoding for all selected files, or only the files that do not have the encodings you specify.
File Encoding Checker requires .NET 4 or above to run.

Is there a way to make VS Code not replace unknown text characters?

I'm currently using VS Code to write a PowerShell script. As part of this script, a regex is used to replace/remove an atypical character that ends up in the data fairly often and causes trouble down the line. The character is ’ (U+2019), and when the script is opened in VS Code it is permanently replaced with � (U+FFFD).
Thus the line:
$user.Name = $user.Name -Replace "'|\’|\(|\)|\s+",""
permanently becomes: $user.Name = $user.Name -Replace "'|\�|\(|\)|\s+",""
until it is manually changed. Seeing as I can paste the U+2019 character in once the file is open and then run the code, I assume that VS Code can interpret it okay and that the problem is with loading the file in. Is there some option I can set to stop this being replaced when I open the file?
In my case, turning on the VS Code setting, "Files: Auto Guess Encoding," has fixed the problem, both for reading and saving.
This looks like it all comes down to encoding. Visual Studio Code uses UTF-8 by default and can in general handle saving/viewing Unicode properly.
If the issue is on opening the file, then it is a case where Visual Studio Code is misinterpreting the file encoding on open. You can change the encoding (Configuring VS Code encoding) via settings in VS Code for file-specific encoding (e.g. UTF-8, UTF-8 with BOM, UTF-16 LE, etc.) by changing the "files.encoding" setting:
"files.encoding": "utf8bom"
If the issue is on saving the file, then it is being saved as ANSI (i.e. Windows-1252) and not as proper UTF-8 or equivalent. On save, the character is replaced with the Replacement Character (U+FFFD), which is then displayed the next time the file is opened.
Note: the default encoding used by Windows PowerShell v5.1 is Windows-1252, which may be why saving scripts with special characters does not work. PowerShell Core v6+ uses UTF-8 by default.
If I save in VS Code with Windows-1252 encoding, I see the character "’" change to � on the next opening. I think the problem is that VS Code doesn't recognize Windows-1252: it opens the file as UTF-8. If you reopen it with the Windows-1252 encoding, it displays correctly. The other encodings work fine, even for displaying the character; this includes UTF-8 with no BOM.
Even PowerShell 5 doesn't have this problem with Windows-1252, only VS Code. Set-Content and Get-Content in PowerShell 5 default to Windows-1252:
"’" | set-content file
get-content file
’
PowerShell 7 would actually have the same problem:
get-content file
�
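Given that, the usual workaround is to be explicit about the encoding when writing the file, so that what is on disk matches VS Code's default UTF-8 interpretation. A small sketch (the file name is a placeholder):
# Windows PowerShell 5: write UTF-8 (with BOM) explicitly instead of the Windows-1252 default
"’" | Set-Content file -Encoding UTF8
Get-Content file -Encoding UTF8   # reads back correctly, and VS Code sees unambiguous UTF-8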

Does pg_dump preserve all Unicode characters when .sql file is ANSI?

I use
pg_dump.exe -U postgres -f "file-name.sql" database-name
to back up UTF-8 encoded databases on PostgreSQL 8.4 and 9.5 on a Windows host. Some may have foreign characters such as Chinese, Thai, etc. stored in character columns.
The resulting .sql file shows ANSI encoding when opened in Notepad++ (I'm NOT applying ANSI to opened files by default). How do I know whether Unicode characters are always preserved in the dump file? Should I be using an archive (object) backup file instead?
Quote from the manual:
"By default, the dump is created in the database encoding."
There is no difference between a text file in ANSI encoding and one in UTF-8 if no extended characters are used. Maybe your dump has no special characters, so the editor doesn't identify it as UTF-8.
If you want the SQL dump in a specific encoding, use the --encoding=encoding parameter or the PGCLIENTENCODING environment variable.
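Applied to the command from the question, that would look something like this (a sketch reusing the asker's placeholder names):
pg_dump.exe -U postgres --encoding=UTF8 -f "file-name.sql" database-name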

How to use postgresql copy command with ANSI encoding?

I use the PostgreSQL COPY command to make a CSV file:
COPY (select * from table) TO '/tmp/123456.csv' with csv header encoding 'UTF8';
This command is OK.
But I want to make the file with ANSI encoding, because I want to open it in Microsoft Excel:
COPY (select * from table) TO '/tmp/123456.csv' with csv header encoding 'ANSI';
This command can't make the CSV file.
Since you seem to be on Windows, maybe you want to use one of the Windows encodings, for example WIN1252 for western European languages.
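For example, adapting the command from the question (a sketch: WIN1252 is an assumption that fits western European Windows locales; substitute your locale's ANSI code page):
COPY (select * from table) TO '/tmp/123456.csv' with csv header encoding 'WIN1252';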

Oracle -Character Encoding

I am facing a peculiar problem where I need to update a particular value in the database to, say, 'Hellò'. When I run a normal UPDATE statement the values are updated fine. But when I put it in a .sql script file and then run the UPDATE statement, the last character gets replaced by a junk value. Can someone enlighten me on this and on how Oracle processes script files?
If the UPDATE statement works on its own, then this isn't an issue with the character encoding in the database, so there are two main culprits to look at: your software and your file encoding.
Check that your editor is UTF-8 compliant. Notepad, for example, is not, but WordPad is, and so are better editors like UltraEdit. You also need to check that the file is saved as UTF-8, since this is not always the default, and if you edited and saved a file with Notepad, for example, it won't be UTF-8 any more.
Once you have a UTF-8 file, you must load it into the database via software that supports UTF-8, which excludes SQL*Plus on Windows in 10g. SQL Developer is OK. If you want to use SQL*Plus, upgrade your client to 11g. You don't say how you are loading it, so check that whatever you are using supports UTF-8.
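If you do end up running the script through SQL*Plus, the client character set comes from the NLS_LANG environment variable, so you can tell it the script is UTF-8 before launching. A hedged PowerShell sketch (the language/territory prefix, credentials, and script name are placeholders):
# tell the Oracle client tools that input/output is UTF-8
$env:NLS_LANG = 'AMERICAN_AMERICA.AL32UTF8'
sqlplus user/password@db '@update.sql'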