That's it. I want to open all the files at once (to make replacements), but Notepad++ opens them all in ANSI and I have to go through each file and change the encoding to UTF-8.
In Notepad++ 5.9 there is an option under Settings -> Preferences -> New Document/Default Directory to automatically convert ANSI files to UTF-8 without BOM on open:
Settings -> Preferences -> New Document/Default Directory -> UTF-8
I believe you can change this in config.model.xml. I'm not sure whether it only sets the default for files of unknown type, but it might be worth changing NewDocDefaultSettings from
<GUIConfig name="NewDocDefaultSettings" format="0" encoding="0" lang="0" />
to
<GUIConfig name="NewDocDefaultSettings" format="0" encoding="4" lang="0" />
Worst case, I think you might be able to record a macro to automate the conversion.
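Alternatively, if running a script outside Notepad++ is acceptable, here is a minimal sketch in Python that converts the files in bulk. It assumes the files really are Windows-1252 "ANSI"; the folder name and extension filter are only placeholders:

from pathlib import Path

SRC_ENCODING = "cp1252"   # "ANSI" on most Western-European Windows systems
DST_ENCODING = "utf-8"

def convert_to_utf8(root, pattern="*.txt"):
    # Read each matching file as Windows-1252 and rewrite it as UTF-8 (no BOM).
    for path in Path(root).rglob(pattern):
        text = path.read_text(encoding=SRC_ENCODING)
        path.write_text(text, encoding=DST_ENCODING)
        print("converted", path)

convert_to_utf8("./files")   # "./files" and "*.txt" are placeholders

Once converted, the files are plain UTF-8 without BOM, so Notepad++ should detect them correctly and you can run the replacements across all of them.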
Late, but for new users: I faced a similar problem.
Step 1:
Settings -> Preferences -> New Document -> [Encoding] UTF-8 without BOM (check "Apply to opened ANSI files")
Somehow that alone didn't solve it for me; I searched around and found that it's a bug!
Notepad++ v6.4.5 bug fixes:
Fix a crash issue while there's missing tag in functionList.xml. Fix
UTF-8 (w/o BOM) detection bug.
Step 2:
That's all: update to v6.4.5 or later and follow Step 1.
Look here to change the default UTF-8 settings for Notepad (new text documents): http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_programs/default-utf-8-encoding-for-new-notepad-documents/525f0ae7-121e-4eac-a6c2-cfe6b498712c
Related
I am using Visual Studio Code to write a LaTeX file with the LaTeX Workspace plugin.
However, every time I open VS Code it insists that the encoding of the TeX file is UTF-8 and makes all the special characters go berserk, but for some reason TeX Live doesn't compile in that encoding even if I convert it. Since another person is using the file too and their editor is set to Windows-1252 encoding, I want to keep using that.
How do I permanently set an encoding for a file (or for an extension) in VS Code?
There are language-specific configurations. Press Ctrl+Shift+P and look for "Preferences: Configure Language Specific Settings...". However, I do not see a LaTeX choice there, but you may, because of the LaTeX plugin. So you could do something like:
{
  "[latex]": {
    "files.encoding": "windows1252"
  }
}
If you don't see one, perhaps you could associate your file extension (.tex?) with a language that is on the list and then use the setting above?
I assume you have
{
  "files.autoGuessEncoding": false
}
already set to false (the default). What the heck, try "true".
And see "Allow to set files.encoding as language specific setting for files on startup", so the language-specific setting should work better on start-up.
Your settings.json per user or per workspace can contain an encoding directive.
If you want Java files opened in UTF-8,
then the following has no effect
"files.encoding" : "utf8",
but this works
"[java]": {
"files.encoding": "utf8"
}
The existing answers show possible solutions for single files or file types. However, you can set the default character set in VS Code by following this path:
File > Preferences > Settings > Encoding > Choose your option
This will define the character set used by default.
VSCode set default file encoding
Sven Eschlbeck's answer illustrated:
The Settings page will open. There are many settings; to get to the desired item without scrolling through all the entries, type "Encod" in the search box. The item "Files: Encoding" is presented to us, and now we can change the setting.
A tip to share with you: "GB18030" works fairly well universally for source code files containing Chinese characters.
More tips:
The encoding applied to the current file is shown in the status bar. Right-click it to call up the options as shown; here you can switch the encoding ad hoc.
Having "files.autoGuessEncoding": true in the User settings, and "files.autoGuessEncoding": false plus "files.encoding": "windows1250" in the Workspace settings, was still giving me windows1252.
I do not understand why the User setting overrode the Workspace setting. I had to disable autoGuessEncoding in the User settings as well to finally get "files.encoding": "windows1250" to work every time.
You may face the same issue, so this could help.
I have a problem setting an encoding in DBeaver, specifically the
SQL file encoding (not the DB encoding!)
to cp1250 (Windows-1250).
There is no problem setting such an encoding for new files/scripts: right-click the Scripts folder (or the whole project folder, from which the scripts inherit):
Properties / Resources / Text file encoding.
The problem is opening existing scripts from text files (SQL Editor / Load script). These are loaded in UTF-8, which is shown as the default encoding. There is no easy way to change the encoding after opening...
Does anyone know how to change default encoding for DBeaver (or for opened files)?
I cannot find the answer anywhere...
See the capture below for a detailed explanation.
Hope this helps (if still relevant)...
Korean (한글) users who wish to set the encoding to UTF-8: follow the steps marked in the figure below.
I had the same problem and solved it by removing the UTF encoding from the driver settings:
Right-click the database > Edit Connection (F4) > click the 'Edit Driver Settings' button > click the 'Driver Properties' tab > right-click 'encoding UTF8' > Remove Property > OK.
For a long time, I tried to set up XeLaTeX to work correctly with UTF-8. Silly me!
A couple of days ago I figured out that XeLaTeX is working correctly. When I edit or create a file using vi, for example, everything works like a charm.
But when I edit or create a LaTeX file using TeXShop, it transforms German umlauts from
...
\begin{document}
\maketitle
üöäßÜÖÄ
...
into
\UTF{00FC}\UTF{00F6}\UTF{00E4}\UTF{00DF}\UTF{00D6}\UTF{00C4}\UTF{00DC}
As a result of running xelatex <name>.tex I receive:
...
<name>.tex:<line number>: Undefined control sequence.
l<line number>\UTF
{00FC}\UTF{00F6}\UTF{00E4}\UTF{00DF}\UTF{00D6}\UTF{00C4}\UTF{00DC}%
Where <name> is the filename and <line number> is the position. They are not relevant!
I have set TeXShop to save the file as Unicode (UTF-8) in the preferences (TeXShop -> Preferences... -> Source), but it does not help.
On my other computer I have the same version installed, and there it saves the document correctly.
Even reinstalling did not help.
The file is saved using the strange \UTF{...} notation instead of the characters.
How can I fix the encoding when saving?
Thank you in advance
Alexander
Because I had the same problem and searched a lot:
In the menu go to:
TeXShop -> Preferences... -> Misc -> pTeX support -> utf export.
Uncheck this checkbox!
After spending two hours experimenting, I found your answer. Thanks a lot; I was able to fix my own environment. However, in addition to
TeXShop -> Preferences... -> Misc -> pTeX support -> utf export
I also had to uncheck
TeXShop -> Preferences... -> Misc -> Automatic conversion from UTF-8-Mac into UTF-8
(I am not sure about the exact English label, since I work in another language). I thought this could be useful for other people.
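If some files have already been saved with the \UTF{...} notation, a rough sketch in Python can turn the codes back into real characters. It assumes the braces contain hexadecimal Unicode code points, and the file name below is just a placeholder:

import re
from pathlib import Path

def restore_unicode(path):
    # Replace TeXShop-style \UTF{XXXX} escapes with the actual Unicode characters.
    text = Path(path).read_text(encoding="utf-8")
    fixed = re.sub(r"\\UTF\{([0-9A-Fa-f]+)\}",
                   lambda m: chr(int(m.group(1), 16)),
                   text)
    Path(path).write_text(fixed, encoding="utf-8")

restore_unicode("document.tex")   # "document.tex" is a placeholder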
This is a simple question:
Is there any automatic indentation alignment in Eclipse PDT when you do a copy-paste?
It doesn't work in my installation (raw copy-paste), but maybe that is because I use tabs instead of spaces, so I want to know whether it will work if I switch to spaces.
Thanks
Actually, such a function now exists for Eclipse CDT. In Preferences: C/C++ -> Editor -> Typing, under "When pasting", enable "Adjust indentation".
Perhaps it is the same for PDT?
I don't know specifically about PDT, but normally you can indent code with Ctrl + i, and format code with Ctrl + Shift + f (under Preferences -> Java -> Code Style -> Formatter you can specify how the formatter should work).
You can even select multiple files in the navigator and choose Source -> Format to format several files at once.
Tony Vermeiren had a suggestion that worked for me (from https://bugs.eclipse.org/bugs/show_bug.cgi?id=351771#c17):
Open workspace/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.php.ui.prefs
Add "smartPaste=false" at the end
Save and restart Eclipse
A similar idea to David's is the use of Save Actions: in Preferences, under Java / Editor / Save Actions, you can define an action to format the Java code files during saves. That would solve the paste-formatting problem at save time.
If you manage code bases with different coding standards, where this formatting might cause problems, define multiple formatter profiles on a per-project basis (using the project Properties window).
OK, then I guess the answer is: no, this functionality doesn't exist yet. I've moved to PhpStorm and I'm more than happy.
TL;DR: uncheck "C/C++ -> Editor -> Typing -> Adjust indentation"
Explanation:
When I copy-paste code with "C/C++ -> Editor -> Typing -> Adjust indentation" checked, the result is that the second line gets unwanted indentation. To fix this, just uncheck "C/C++ -> Editor -> Typing -> Adjust indentation".
I have a bizarre problem: somewhere in my HTML/PHP code there's a hidden, invisible character that I can't seem to get rid of. By copying it from Firebug and converting it I identified it as U+FEFF, or 'zero width no-break space'. It shows up as a non-empty text node on my website and is causing a serious layout problem.
The problem is, I can't get rid of it. I can't see it in my files even when turning invisibles on (duh), and no search tool seems to pick up on it. I rewrote my code around where it could be, but it seems to be somewhere deeper in one of the framework files.
How can I find characters by charcode across files or something like that? I'm open to different tools, but they have to work on Mac OS X.
You don't see the character in the editor because text editors generally won't show it to you. U+FEFF (or FFFE, when the bytes are swapped) is the so-called byte order mark (BOM). It sits at the start of a Unicode file to indicate in which order the bytes of multi-byte code units are stored; Windows tools in particular like to add it to UTF-8 files as an encoding signature.
To get rid of it, tell your editor to save the file either as ANSI/ISO-8859 or as Unicode without BOM. If your editor can't do so, you'll either have to switch editors (sadly) or use some kind of truncation tool like, e.g., a hex editor that allows you to see how the file really looks.
From a quick search, it seems that TextWrangler has a "UTF-8, no BOM" mode. Otherwise, if you're comfortable with the terminal, you can use Vim:
:set nobomb
and save the file. Presto!
The BOM is always the very first character in a text file. Editors that support the BOM will, as mentioned, not show it to you at all.
If you are using TextMate and the problem is in a UTF-8 file:
Open the file
File > Re-open with encoding > ISO-8859-1 (Latin1)
You should be able to see and remove the first character in the file.
File > Save
File > Re-open with encoding > UTF8
File > Save
It works for me every time.
It's a byte order mark. Under Mac OS X: open a terminal window, go to your sources, and type:
grep -rn $'\xEF\xBB\xBF' *
(those are the UTF-8 bytes of U+FEFF). It will show you the line numbers and file names containing the BOM.
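If you would rather do the search (and optionally the clean-up) with a small script, here is a sketch along the same lines in Python; the directory and extension list are only examples:

from pathlib import Path

BOM = "\ufeff"   # zero width no-break space / byte order mark

def scan(root, strip=False):
    # List every file containing U+FEFF; optionally remove the character in place.
    extensions = {".php", ".html", ".js", ".css"}   # adjust to your project
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in extensions:
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except UnicodeDecodeError:
            continue   # not UTF-8; skip rather than risk corrupting it
        if BOM in text:
            print(path, "contains U+FEFF at index", text.index(BOM))
            if strip:
                path.write_text(text.replace(BOM, ""), encoding="utf-8")

scan(".", strip=False)   # set strip=True to actually remove the character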
In Notepad++, there is an option to show all characters. From the top menu:
View -> Show Symbol -> Show All Characters
I'm not a Mac user, but my general advice would be: when all else fails, use a hex editor. Very useful in such cases.
See "Comparison of hex editors" in WikiPedia.
I know it is a little late to answer this question, but I am adding how to change the encoding in Visual Studio; I hope it will be helpful for someone reading this sometime:
Go to File -> Save (your filename) as...
In the file dialog, select the small arrow next to the Save button -> click Save with Encoding...
Click Yes (in the "Do you want to replace the existing file" dialog)
Finally, select e.g. Unicode (UTF-8 without signature); that removes the BOM.