Is there an NSIS Unicode plug-in for symmetric encryption/decryption available?
I tried Blowfish and NSISCrypt with the Unicode true option.
Blowfish doesn't seem to give any output whatsoever, and NSISCrypt gives some strange (I believe Chinese) characters, and I can't even get the decryption part to work without Unicode enabled.
Any advice?
You need the Unicode versions of the plugins when you are using Unicode true. If the plugin does not have a Unicode version then you should ask the plugin author to generate one.
It is also possible to call Ansi plugins from Unicode NSIS if you use the CallAnsiPlugin plug-in:
Section
InitPluginsDir ;make sure we have $pluginsdir
File "/ONAME=$pluginsdir\NsisCrypt.dll" "${NSISDIR}\Plugins\x86-ansi\NsisCrypt.dll" ;you must extract the Ansi plugin manually
CallAnsiPlugin::Call "$pluginsdir\NsisCrypt" Hash 2 "Test string" "md5" ; The CallAnsiPlugin::Call parameters are: Dll Function ParameterCount Parameter1..N
Pop $1
DetailPrint MD5=$1
CallAnsiPlugin::Call "$pluginsdir\NsisCrypt" EncryptSymmetric 4 "test string" "3des" "doq5Eh/wmT6vWoVVyRpdPhMD9KNsWa0G" "EkjR1hOing8="
Pop $1
DetailPrint 3DES=$1
CallAnsiPlugin::Call "$pluginsdir\NsisCrypt" DecryptSymmetric 4 "$1" "3des" "doq5Eh/wmT6vWoVVyRpdPhMD9KNsWa0G" "EkjR1hOing8="
Pop $1
DetailPrint PlainText=$1
SectionEnd
I am using MIT scheme, and would like to be able to do something like this:
(define π 3.14159265)
Without having an encoding error like this:
;Illegal character: #\U+80
;To continue, call RESTART with an option number:
; (RESTART 1) => Return to read-eval-print level 1
MIT Scheme does have Unicode support, but it appears that it doesn't support Unicode in source code, which is what I am looking for. It turns out that ISO-8859-1 (the encoding MIT Scheme uses) does not contain any Greek letters, which is a pity.
Solutions that might work, but are not very good:
Writing all of my code into text files and using the built-in Unicode support to read in the Unicode characters as code.
Rewriting the entire interpreter to accept Unicode names
Using a different lisp implementation which allows for Unicode names.
Can't wait to hear from the Stack Overflowers!
You can definitely use Unicode symbols in Guile, Gambit, SCM, and Chicken.
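For example, in Guile something like this should work when the source file is saved as UTF-8 (a sketch; add a ;; coding: utf-8 comment near the top if your Guile does not read UTF-8 by default, and I have not checked the exact details for the other implementations):
(define π 3.14159265)
(define (circle-area r) (* π r r)) ; a Unicode identifier used like any other name
(display (circle-area 2.0))
(newline)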
I want to add a trademark superscript (™) in the NSI script.
I tried using the Unicode character for trademark, U+2122, but it doesn't display the trademark character correctly when the installer exe is run.
I have the following questions:
How do I add the trademark symbol in the NSI file?
I am using NSIS compiler version 2.46. Do I need to upgrade?
How do I create (enable) Unicode support in an NSI file?
Source files in NSIS 2 are just a bunch of bytes and these bytes are stored directly in the .exe. At run-time, Windows will (on NT-based systems) convert these bytes to Unicode strings using the current codepage/system locale ("Language for non-Unicode programs"). This means that you have to use the correct codepage/encoding in your text editor. If your installer supports multiple languages you need to use LangString and edit those strings with the correct encoding set in your editor; using a .nsh for each language might help, as sketched below.
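A minimal sketch of the LangString approach (the language, string name and text are made-up examples; the script, or the .nsh it includes, has to be saved in the codepage matching the language, Windows-1252 in this case):
LoadLanguageFile "${NSISDIR}\Contrib\Language files\German.nlf"
LangString Greeting ${LANG_GERMAN} "Grüße" ;stored as Windows-1252 bytes in an Ansi installer
Section
MessageBox MB_OK "$(Greeting)"
SectionEnd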
NSIS 3 uses Unicode internally in the compiler and if you are creating a Unicode installer (Unicode True) then you can use any Unicode code point. You can save the .nsi as UTF-8 or UTF-16 (with BOM) or you can use the ${U+hexnumber} syntax:
Unicode True
Section
MessageBox mb_ok "Hello World${U+2122}"
SectionEnd
NSIS 3 can also generate Ansi installers and it knows about the ${U+hexnumber} syntax, but it cannot guarantee that the codepoint will display correctly on the end user's system; it is still limited to simple bytes and will convert from Unicode to Ansi using the current codepage of the system you are compiling on.
You can try to use the character 0x99, the Windows-1252 equivalent of U+2122.
On a Western-configured Windows you can enter it directly from the keyboard with Alt+0153 (keep the Alt key pressed while entering the digits on the numeric keypad; it is Alt, not AltGr).
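In an NSIS 2 (Ansi) script that boils down to typing the character straight into the source, as long as the .nsi is saved in the Windows-1252 codepage (a sketch; the product name is made up):
Name "MyProduct™" ;the ™ must be stored as the single byte 0x99, not as UTF-8
Section
MessageBox MB_OK "Installing MyProduct™"
SectionEnd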
There are a lot of programmers' editors that claim to support Unicode / UTF-8. I've tried a number of them (UltraEdit, jEdit, EmEditor) but none of them tell you how to actually enter Unicode characters into a file. Some of them tell you how to change the default file encoding to UTF-8 or how to select a font with good Unicode coverage, but not how to enter UTF-8 into a file using their editor.
The Go language (and some others) supports UTF-8 source, and I like the idea of using actual Unicode symbols for variables instead of variables with names like omega. I haven't found a programmers' editor yet that actually allows you to do this, though.
The only editor / word processor that I've found that shows you how to enter Unicode is Microsoft Word: type the Unicode value and press Alt+X and Word converts it. To get the Greek letter omega type "03c9" followed by Alt+X. UltraEdit will let you copy UTF-8 from a web page into it, but their docs don't say how to actually enter UTF-8 in a file, and their tech support people don't know either.
This should be simple, but seems to be completely undocumented. Is there some key-combination convention that lets you enter Unicode into these editors that supposedly support Unicode, the way that Ctrl-F is widely used for search?
Thanks.
The standard programmer’s editor vim(1) supports limited Unicode input even if your operating system should be too broken to do so (are there any such, still?).
Just enter ^VuXXXX, where XXXX represents exactly four hex digits.
That will allow you to enter the ~6% of Unicode allocated to the Basic Multilingual Plane. The rest are forbidden to you.
This may be fixed in a newer release.
Otherwise, just use your mouse.
A few techniques I use if an editor is lacking:
Use the Windows charmap.exe utility to select characters and paste into a document.
Install an input method editor (IME) to write in a particular language.
Windows ALT keycodes.
Better to set your keyboard to generate Unicode characters across all Windows applications than to rely on a single application's custom input feature IMO.
Use the EnableHexNumpad feature and you can type any character in the Basic Multilingual Plane using Alt+numpad-plus,hexcode. (May not be of much use on a laptop without a numpad though.)
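EnableHexNumpad is just a registry value; one way to switch it on (you have to log off and back on before it takes effect):
reg add "HKCU\Control Panel\Input Method" /v EnableHexNumpad /t REG_SZ /d 1 /f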
Or if there are particular characters you want to type a lot, find a keyboard layout that allows you to type them directly. For example eurokb might cover it, or you can make your own with MSKLC.
Old question, but you can type a lot of unicode in GNU Emacs or Vim
GNU Emacs: M-x set-input-method RET tex (or C-x RET C-\ tex) will let you type \omega to generate ω
Vim: Vim digraphs can generate unicode; C-k w * in insert mode gives you ω.
deceze hit the nail on the head. (S)he just didn't elaborate. bobince gave a bit more.
And I'm hazarding a guess that you're a developer or tester working on L10N or I18N. I'm also guessing you need to do more than just a few characters here or there, or you'd be satisfied with pasting from another app. So, I'll share some advice. (Note: here, "you" refers to the next person to look here; I'm sure the original poster doesn't care anymore by now. :-))
If you're on Windows 10, install an appropriate keyboard driver that lets you input the characters you want into any application. I'm sure Linux has support for the same sort of thing.
E.g. I'm teaching myself Hindi (हिंदी), so I installed Windows' Hindi (Devanagari) support. I typed "Hindi", in Hindi, using that support, then I switched back to US English to do the rest of this post. If all you need are accented characters from Western European languages, you can install the INTL English support and type directly in español or français or whatever.
Don't look at entering Unicode characters as entering some sort of special data amidst your English text. It's just someone else's language. Use their keyboard. Type their language.
I'm writing a flashcard app to help my learning. I'm using the Hindi keyboard support to type characters into Word, WordPad, Excel, and the Visual Studio editor. And that Hindi keyboard support works exactly the same way in all of those apps, as I'd expect it to work in just about any text editor that supports Unicode. And as you saw above, it also works in a simple text edit control in Chrome. No copy and paste. No remembering special codes. It's as ubiquitous as ctrl-F.
It looks like the unicode support in programmers editors (except for some Microsoft products) is mostly read-only. They can open a file with unicode and display the characters, but typing unicode into a file is a different story. If you want to enter unicode in a programmers editor you can copy it from somewhere else (a web page or Microsoft Word or Notepad) and paste it into the editor, but the editors make typing unicode difficult or impossible.
UltraEdit tech support referred me to this web page which explains a lot. Unfortunately none of the solutions worked with UltraEdit.
Microsoft Word and Notepad support unicode entry. Type the unicode value followed by Alt+X and it converts the hexadecimal and displays it. You can then copy and paste it into UltraEdit or one of the other programmers editors. As others have mentioned unicode support depends on support within the operating system as well as the editor.
What got me interested in using unicode in source code files is Mark Summerfield's book Programming in Go. He includes an example .go file that uses unicode. It would be great to use unicode Greek characters for variable names instead of variables named "omega" or "theta".
Using unicode in source code is a bad idea, however. Support for unicode in programmers editors is lousy, and developers would have to save or convert their source code files to utf-8 instead of ASCII. Developer's tools are just not ready to write code in unicode no matter how neat the idea sounds.
For spell checking purposes I would like to install an additional "platform dictionary" in my Eclipse IDE.
You can see the list of installed platform dictionaries in Window > Preferences > General > Editors > Text Editors > Spelling, in the "Platform dictionary" field. In my Helios Service Release 1 there are only UK and US English. I would like to add the language of my country, so I can write comments in my language and have spell checking. The Eclipse help doesn't explain how.
If you can't find your language word list you can generate one using aspell.
aspell --lang=pl dump master | aspell --lang=pl expand | tr ' ' '\n' > pl.dict
On Ubuntu, aspell generates the list in UTF-8; on other systems you can add an encoding option:
--encoding=utf-8
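Putting the two together, e.g. for the Polish list from the example above (a sketch; adjust the language code to yours):
aspell --lang=pl --encoding=utf-8 dump master | aspell --lang=pl --encoding=utf-8 expand | tr ' ' '\n' > pl.dict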
I am not sure you can add a "Platform dictionary", so that leaves you with a "user defined" one:
Eclipse supports a standard one-word-per-line format for the 'dictionary' file.
You can find several such word lists at Kevin's Word List on SourceForge.net, which also has links to other sites.
If you can't find a good wordlist and can't run aspell, you can also get wordlists from Debian. On Windows, I used the Swiss German wordlist: click "all", pick a mirror, download the .deb file, use 7-Zip or similar to open it, open the data.tar inside, and find the file you are looking for. In my case it was /usr/share/dict/swiss.
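If you do have a Debian or Ubuntu machine at hand, you can skip the manual unpacking (a sketch; wswiss is the Swiss German word list package, substitute the package for your language):
apt-get download wswiss
dpkg-deb -x wswiss_*.deb extracted/
# the word list ends up under extracted/usr/share/dict/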
Thank you @VonC, @Konrad Nowicki and @Alex Schröder for the earlier answers. I didn't find the other answers fully satisfying, so I wanted to write my own answer. Your question:
How to install a platform dictionary in Eclipse?
For English:
Since you mentioned that US and UK English is included, no need to explain.
For non-English languages, i.e. languages with special characters like åäöü (the example here is Swedish; tested and it works):
Download a text file of the words. My method: I googled "swedish word list txt" (change "swedish" to whatever language you need a dictionary for, and hopefully you'll find a txt dictionary) and found this GitHub repo with a Swedish dictionary (link worked 2018-09-25): https://github.com/martinlindhe/wordlist_swedish.
Make sure the dictionary file is encoded as UTF-8 and uses Unix (LF) end-of-line characters. If it isn't UTF-8, convert the åäöü characters to UTF-8, for example with Notepad++. If it has Windows (CR LF) or Macintosh (CR) line endings, convert them to Unix (LF), again for example with Notepad++.
Append the new word list to your custom dictionary, which is often located at %userprofile%/eclipse/dictionary.txt or ~/eclipse/dictionary.txt depending on where you installed Eclipse; a text editor such as Notepad++ does the job, or see the command-line sketch below.
Restart Eclipse and it should work.
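If you prefer the command line for the conversion and append steps, something along these lines works (a sketch, assuming the downloaded list is called swedish.txt, is ISO-8859-1 with Windows line endings, and that iconv and dos2unix are installed; adjust names, paths and encodings to your case):
iconv -f ISO-8859-1 -t UTF-8 swedish.txt | dos2unix >> ~/eclipse/dictionary.txt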
What is the secret to Japanese characters in a Windows XP .bat file?
We have a script for opening a file off disk in kiosk mode:
@ECHO OFF
"%ProgramFiles%\Internet Explorer\iexplore.exe" -K "%CD%\XYZ.htm"
It works fine when the OS is English, and it works fine on the Japanese OS when XYZ is made up of English characters, but when XYZ is made up of Japanese characters they get mangled into gibberish by the time IE tries to find the file.
If the batch file is saved as Unicode or Unicode big endian, the script won't even run.
I have tried various ways of encoding the Japanese characters. Ampersand escapes do not work (〹).
Percent escapes do not work: %xx%xx%xx
ABC works, but AB%43 becomes AB3 in the error message, so it looks like the percent escape is being treated as parameter substitution. This is confirmed because %043 puts in the name of the script!
One thing that does work is pasting the Japanese characters into a command prompt.
@ECHO OFF
CD "%ProgramFiles%\Internet Explorer\"
Set /p URL="file to open: "
start iexplore.exe -K %URL%
This tells me that iexplore.exe will accept and parse the parameter correctly when it has Japanese characters, but not when they are written into the script.
So it would be nice to know what the secret may be to getting the parameter into IE successfully via the batch file, as opposed to via the clipboard and an environment variable.
Any suggestions greatly appreciated!
best regards
Richard Collins
P.S.
Another post has made this suggestion, which I have yet to follow up:
You might have more luck in cmd.exe if you opened it in UNICODE mode. Use "cmd /U".
Batch renaming of files with international chars on Windows XP
I will need to find out if this can be from inside the script.
For the record, a simple answer has been found for this question: if the batch file is saved as ANSI, it works!
First of all: Batch files are pretty limited in their internationalization support. There is no direct way of telling cmd what codepage a batch file is in. UTF-16 is out anyway, since cmd won't even parse that.
I have detailed an option in my answer to the following question:
Batch file encoding
which might be helpful for your needs.
In principle it boils down to the following:
Use an encoding which has single-byte mappings for ASCII
Put a chcp ... at the start of the batch file
Use the set codepage for the rest of the file
You can use codepage 65001, which is UTF-8, but make sure that your file doesn't include the U+FEFF character at the start (used as the byte-order mark in UTF-16 and UTF-32, and sometimes used as a marker for UTF-8 files as well). Otherwise the first command in the file will produce an error message.
So just use the following:
@echo off
chcp 65001
"%ProgramFiles%\Internet Explorer\iexplore.exe" -K "%CD%\XYZ.htm"
and save it as UTF-8 without BOM (Note: Notepad won't allow you to do that) and it should work.
cmd /u won't do anything here, that advice is pretty much bogus. The /U switch only specifies that Unicode will be used for redirection of input and output (and piping). It has nothing to do with the encoding the console uses for output or reading batch files.
URL encoding won't help you either. cmd is hardly a web browser and outside of HTTP and the web URL encoding isn't exactly widespread (hence the name). cmd uses percent signs for environment variables and arguments to batch files and subroutines.
"Ampersand escape" also known as character entities known from HTML and XML, won't work either, because cmd is also not HTML or XML. The ampersand is used to execute multiple commands in a single line.
I too suffered this frustrating problem in batch/cmd files. However, so far as I can see, no one yet has stated the reason why this problem occurs, here or in other, similar posts at StackOverflow. The nearest statement addressing this was:
“First of all: Batch files are pretty limited in their internationalization support. There is no direct way of telling cmd what codepage a batch file is in.”
Here is the basic problem. Cmd files are the Windows 2000+ successor to MS-DOS and IBM-DOS bat(ch) files. MS and IBM DOS (1984 vintage) were written in the IBM-PC character set (code page 437). There, the codes with the eighth bit set were assigned (or "clothed" with) characters different from those assigned to the corresponding codes in Windows, ANSI, or Unicode. The presumption of CP437 encoding is unalterable (except, as previously noted, through cmd.exe /u). Where the characters of the IBM-PC set have exact counterparts in the Unicode set, Windows Explorer remaps them to the Unicode counterparts. Alas, even Windows-1252 characters like š and ¾ have no counterpart in code page 437.
Here is another way to see the problem. Try opening your batch/cmd script using the Windows Edit.com program (at C:\Windows\system32\Edit.com). The Windows-1252 character 0146 ’ (Unicode 8217) instead appears as IBM-PC 146 Æ. A batch command to rename Mary'sFile.txt as Mary’sFile.txt fails, as it is interpreted as MaryÆsFile.txt.
This problem can be avoided in the case of copying a file named Mary’sFile.txt: cite it as Mary?sFile.txt, e.g.:
xCopy Mary?sFile.txt Mary?sLastFile.txt
You will see a similar treatment (substitution of question marks) in a DIR list of files having Unicode characters.
Obviously, this is useless unless an extant file has the Unicode characters. This solution’s range is paltry and inadequate, but please make what use of it you can.
You can try to use Shift-JIS encoding.
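That is essentially the same idea as the "save it as ANSI" answer above: on a Japanese system the ANSI codepage is 932 (Shift-JIS). A sketch of the kiosk script saved in Shift-JIS (XYZ.htm stands in for the real Japanese file name, as in the question; the chcp line is only needed if the console is not already on codepage 932):
@ECHO OFF
chcp 932
"%ProgramFiles%\Internet Explorer\iexplore.exe" -K "%CD%\XYZ.htm"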