How can I write Vigenère cryptanalysis of Persian text?

How can I write Vigenère cryptanalysis for the Persian language?
Is there any sample source code for Persian?

I was unable to find a Persian-language version of Vigenère cryptanalysis.
You could try one of the solutions here and adapt the alphabet used to the Persian alphabet. (You'll need Persian-language letter-usage statistics; it won't work with English ones.) I recommend the Python one, but that's just personal taste.
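To give a rough idea of the adaptation, here is a minimal sketch of the usual frequency-analysis step over a 32-letter Persian alphabet. The frequency table below is a placeholder: you must substitute measured Persian letter statistics for the scoring to mean anything.

# A minimal sketch of Vigenère key recovery adapted to the Persian alphabet.
ALPHABET = "ابپتثجچحخدذرزژسشصضطظعغفقکگلمنوهی"  # the 32 Persian letters
M = len(ALPHABET)
IDX = {ch: i for i, ch in enumerate(ALPHABET)}

# PLACEHOLDER: replace with real Persian relative frequencies (summing to ~1).
FREQ = [1.0 / M] * M

def shift_score(column, shift):
    """Chi-squared distance between the column decrypted with `shift`
    and the expected letter frequencies."""
    counts = [0] * M
    for ch in column:
        counts[(IDX[ch] - shift) % M] += 1
    total = sum(counts) or 1
    return sum((counts[i] / total - FREQ[i]) ** 2 / FREQ[i] for i in range(M))

def recover_key(ciphertext, key_len):
    """For each key position, pick the shift whose statistics fit best."""
    text = [ch for ch in ciphertext if ch in IDX]  # keep Persian letters only
    key = []
    for pos in range(key_len):
        column = text[pos::key_len]
        best = min(range(M), key=lambda s: shift_score(column, s))
        key.append(ALPHABET[best])
    return "".join(key)

Key-length estimation (Kasiski examination or index of coincidence) works unchanged; only the alphabet and the frequency table are language-specific.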

Related

How to automatically translate between traditional and simplified Chinese characters with Unicode?

Is there a way to translate programmatically from traditional to simplified Chinese characters? If so, how do you do it; does Unicode offer a way? If not, why doesn't a database with the mapping exist? Is it not one-to-one? I know you can find a mirror-image glyph from another glyph in Unicode, but can you find the simplified glyph from a traditional one?
It is indeed not one to one. My favorite example to explain this quickly is this:
Take the character for face, 面. So far so good: it's the same in Traditional and Simplified Chinese. However, 面 is also the simplified version of 麵, noodle (where the 面 part on the right is the phonetic component). So if you see 面 in a simplified text, you have no way of knowing from the character alone which traditional character it came from.
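A toy illustration of the ambiguity in Python. The two entries are real, well-known examples; a usable converter needs word-level mapping data, such as the OpenCC project provides.

# Each simplified character lists every traditional character it can stand for.
SIMP_TO_TRAD = {
    "面": ["面", "麵"],  # face / noodle both simplify to 面
    "发": ["發", "髮"],  # to emit / hair both simplify to 发
}

def traditional_candidates(simplified_char):
    """Return every traditional character this simplified form may represent."""
    return SIMP_TO_TRAD.get(simplified_char, [simplified_char])

print(traditional_candidates("面"))  # ['面', '麵']: only context can decide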

antlr4 and international characters

I have been using antlr4 to parse a German document and so far I have done the following to parse the text that includes German characters:
LETTERS:
[a-zA-Z_\u00DC\u00FC\u00D6\u00F6\u00C4\u00E4\u00DF]; // hex unicodes for ÜüÖöÄäß
What is the best way to describe the alphabetic characters of all languages in Unicode in a way that ANTLR understands, without specifying each language/character individually? Say, French, Arabic, Chinese, or Japanese characters?
Thank you
The best way is to use character ranges corresponding to the desired Unicode classes. Even then, the result can be a bit clumsy. See this worked example.
The raw data available in the Unicode standard's Appendix tables can be stripped and munged into a usable format with just a bit too much effort. ;)
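For what it's worth, newer ANTLR releases also accept Unicode property escapes such as [\p{Alpha}] in lexer char sets. If you do need explicit ranges, a short script over Python's bundled Unicode tables can generate them; a sketch (BMP only, for brevity):

import unicodedata

def letter_ranges(limit=0xFFFF):
    """Yield (start, end) code-point ranges whose category is a letter class."""
    start = None
    for cp in range(limit + 2):            # +2 flushes a range ending at `limit`
        is_letter = cp <= limit and unicodedata.category(chr(cp)).startswith("L")
        if is_letter and start is None:
            start = cp
        elif not is_letter and start is not None:
            yield start, cp - 1
            start = None

body = "".join("\\u%04X" % s if s == e else "\\u%04X-\\u%04X" % (s, e)
               for s, e in letter_ranges())
print("LETTERS: [" + body + "];")          # paste the output into the grammar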

How to convert punjabi unicode to English Text?

I have records saved in a SQL Server database in the form of Punjabi Unicode. Now I want to convert this Punjabi Unicode to English text. Is there any utility that can help me? Please reply if anyone has a solution, paid or free. Thanks in advance.
The question is nonsensical as posed.
Unicode is not a language. It merely provides a mapping from characters to code points, in such a way that text written in Punjabi stays Punjabi when another font is applied. There is no "English" Unicode, and no "Punjabi" Unicode either.
You can only translate from Punjabi to English using translation software. (Given the current state of automatic translation software, you are better off with a human who is fluent in both languages.)
If you want to convert Punjabi Unicode into Latin-letter text, here is an example:
ਨਿੱਕੀ ਕਹਾਣੀ (Unicode)
in`kI khwxI (converted to Gurmukhi Lipi; it shows as English letters until you change its font to GurmukhiLipi, at which point it displays as Punjabi)
You can check my website, previously in Unicode and now in Gurbani Lipi (I have installed a plugin that renders the Latin letters as Punjabi).
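For what the trick is worth, it amounts to a plain code-point remapping. A minimal Python sketch; the two mapping entries are hypothetical guesses read off the example pair above, and a real converter needs the legacy font's full table:

# Map Unicode Gurmukhi code points to the ASCII slots a legacy font expects.
LEGACY_MAP = str.maketrans({
    "ਕ": "k",  # hypothetical: Gurmukhi KA -> the legacy font's 'k' glyph slot
    "ੀ": "I",  # hypothetical: vowel sign II -> the legacy font's 'I' glyph slot
})

def to_legacy(text):
    """Remap code points: the output only looks like English until the
    legacy Punjabi font is applied."""
    return text.translate(LEGACY_MAP)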

Romanization of Unicode text

I am looking for a way to transliterate Unicode letter characters from any language into accented Latin letters. The intent is to allow foreigners to gain insight into the pronunciation of names and words written in any non-Latin script.
Examples:
Greek: Romanize("Αλφαβητικός") returns "Alphabētikós" (or "Alfavi̱tikós")
Japanese: Romanize("しんばし") returns "shimbashi" (or "sinbasi")
Russian: Romanize("яйца Фаберже") returns "yaytsa Faberzhe" (or "jajca Faberže")
It should ideally support characters in the following scripts: CJK, Indic, Cyrillic, Semitic, and Greek. It should be data-driven and extensible, using data from the Unicode Consortium, the USA, the EU, or the UN. The code should be open source, written in .NET or Java.
Does such a library exist?
The problem is a lot more complex than you think.
Greek, Cyrillic, Indic scripts, Georgian -> trivial, you could program that in an hour
Thai, Japanese Kana -> doable with a bit more effort
Japanese Kanji, Chinese -> these are not alphabets/syllabaries, so you're not in fact transliterating; you're looking up the pronunciation of each symbol in a (hopefully large) dictionary (EDICT and CCDICT should work), and you'll often get it wrong unless you also consider the context, especially in Japanese
Korean -> technically an alphabet; computers store the precomposed syllable blocks, but those decompose into their letters (jamo) algorithmically, so you need a jamo-to-Latin table plus the decomposition formula rather than a per-word database (see the sketch after this list)
Arabic, Hebrew -> these languages don't write down short vowels, so a lot of the time your transliteration will be something unreadable like "bytlhm" (Bethlehem). I'm not aware of any large databases that map Arabic or Hebrew words to their pronunciation.
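On the Korean point, the decomposition really is pure arithmetic: every precomposed syllable is 0xAC00 + (lead*21 + vowel)*28 + tail. A sketch of a naive jamo-level transliteration in Python; the tables approximate Revised Romanization values and ignore its context-dependent sound changes:

LEADS  = ["g","kk","n","d","tt","r","m","b","pp","s","ss","","j","jj","ch",
          "k","t","p","h"]
VOWELS = ["a","ae","ya","yae","eo","e","yeo","ye","o","wa","wae","oe","yo",
          "u","wo","we","wi","yu","eu","ui","i"]
TAILS  = ["","g","kk","gs","n","nj","nh","d","l","lg","lm","lb","ls","lt",
          "lp","lh","m","b","bs","s","ss","ng","j","ch","k","t","p","h"]

def romanize_hangul(text):
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                    # precomposed syllable block
            lead, rest = divmod(code, 21 * 28)
            vowel, tail = divmod(rest, 28)
            out.append(LEADS[lead] + VOWELS[vowel] + TAILS[tail])
        else:
            out.append(ch)
    return "".join(out)

print(romanize_hangul("한국"))  # -> "hangug" (naive; RR writes "hanguk")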
You can use UnidecodeSharp:
a C# port of the Python Unidecode package, which is itself a port of Perl's unidecode.
(There are also PHP and Ruby implementations available.)
Usage:
using BinaryAnalysis.UnidecodeSharp;

string _Greek = "Αλφαβητικός";
MessageBox.Show(_Greek.Unidecode());
string _Japan = "しんばし";
MessageBox.Show(_Japan.Unidecode());
string _Russian = "яйца Фаберже";
MessageBox.Show(_Russian.Unidecode());
I hope it works well for you.
I am unaware of any open-source solution here beyond ICU. If ICU works for you, great. If not, note that I am the CTO of a company that sells a commercial product for this purpose that can deal with the icky cases like Chinese words, Japanese multiple readings, and Arabic incomplete orthography.
The Unicode Common Locale Data Repository has some transliteration mappings you could use.
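As a quick taste of the ICU/CLDR route, a sketch assuming the PyICU bindings are installed:

from icu import Transliterator

# ICU's rule-based transforms are built from the CLDR transform data above.
to_latin = Transliterator.createInstance("Any-Latin; Latin-ASCII")
print(to_latin.transliterate("Αλφαβητικός"))   # roughly "Alphabetikos"
print(to_latin.transliterate("яйца Фаберже"))  # roughly "jajca Faberze"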

What are the experiences with using unicode in identifiers

These days, more languages are using Unicode, which is a good thing. But it also presents a danger. In the past there were troubles distinguishing between 1 and l, or 0 and O. But now we have a whole new range of similar characters.
For example:
ì, î, ï, ı, ι, ί, ׀ ,أ ,آ, ỉ, ﺃ
With these, it is not that difficult to create some very hard-to-find bugs.
At my work, we have decided to stay with ANSI characters for identifiers. Is there anybody out there using Unicode identifiers, and what are your experiences?
Besides the similar-character bugs you mention and the technical issues that can arise from using different editors (with or without a BOM, different encodings mixed into the same file by copy-pasting, which only matters once there are characters that cannot be encoded in ASCII, and so on), I find that it's not worth using Unicode characters in identifiers. English has become the lingua franca of development, and you should stick to it while writing code.
This I find particularly true for code that may be seen anywhere in the world by any developer (open source, or code that is sold along with the product).
My experience with using Unicode in C# source files was disastrous, even though it was Japanese (so there was nothing to confuse with an "i"). SourceSafe doesn't like Unicode, and when you find yourself manually fixing corrupted source files in Word, you know something isn't right.
I think your ANSI-only policy is excellent. I can't really see any reason why it would not be viable (as long as most of your developers speak English; and even if they don't, the world is used to the ANSI character set).
I think it is not a good idea to use the entire ANSI character set for identifiers. No matter which ANSI code page you're working in, your ANSI code page includes characters that some other ANSI code pages don't include. So I recommend sticking to ASCII, no character codes higher than 127.
In experiments I have used a wider range of ANSI characters than just ASCII, even in identifiers. Some compilers accepted it. Some IDEs needed options to be set for fonts that could display the characters. But I don't recommend it for practical use.
Now on to the difference between ANSI code pages and Unicode.
In experiments I have stored source files in Unicode and used Unicode characters in identifiers. Some compilers accepted it. But I still don't recommend it for practical use.
Sometimes I have stored source files in Unicode and used escape sequences in some strings to represent Unicode character values. This is an important practice and I recommend it highly. I especially had to do this when other programmers used ANSI characters in their strings, and their ANSI code pages were different from other ANSI code pages, so the strings were corrupted and caused compilation errors or defective results. The way to solve this is to use Unicode escape sequences.
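To make the escape-sequence practice concrete, a small Python illustration (the principle carries over to most languages):

# The escaped forms keep the source file pure ASCII, so the string survives
# even if the file is later reread under the wrong code page.
cafe_literal = "café"                                    # fragile across code pages
cafe_escaped = "caf\u00e9"                               # ASCII-only source, same string
cafe_named   = "caf\N{LATIN SMALL LETTER E WITH ACUTE}"  # self-documenting variant
assert cafe_literal == cafe_escaped == cafe_named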
I would also recommend using ASCII for identifiers. Comments can stay in a non-English language if the editor, IDE, compiler, etc. are all locale-aware and set up to use the same encoding.
Additionally, some case-insensitive languages lowercase identifiers before using them, and that causes problems if the active system locale is Turkish or Azerbaijani; see here for more info about the Turkish locale problem. I know that PHP does this, and it has a long-standing bug.
This problem is also present in any software that compares strings using Turkish locales, not only the language implementations themselves. It causes many headaches.
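A quick way to see the Turkish casing problem, assuming the PyICU bindings are available:

from icu import Locale, UnicodeString

# Locale-sensitive lowercasing: Turkish maps I to dotless i (U+0131).
word = "QUIT"
print(str(UnicodeString(word).toLower(Locale("en"))))  # quit
print(str(UnicodeString(word).toLower(Locale("tr"))))  # quıt  <- dotless i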
It depends on the language you're using. In Python, for example, it is easier for me to stick to Unicode, as my applications need to work in several languages. So when I get a file from someone (or something) I don't know, I assume it is Latin-1 and translate it to Unicode.
That works for me, as I'm in Latin America.
Actually, once everything is ironed out, the whole thing becomes a smooth ride.
Of course, this depends on the language of choice.
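The decode-early pattern described above, sketched in Python (the file names are hypothetical):

# Decode at the boundary, work in Unicode internally, encode only on output.
with open("unknown.txt", "rb") as f:
    text = f.read().decode("latin-1")  # never fails: every byte is valid Latin-1
# ... process `text` as Unicode ...
with open("out.txt", "w", encoding="utf-8") as f:
    f.write(text)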
I haven't ever used Unicode for identifier names. But what comes to mind is that Python allows Unicode identifiers in version 3: PEP 3131.
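For example, this is valid Python 3 under PEP 3131:

π = 3.14159   # Greek letters are valid identifier characters in Python 3
Δx = 0.001
print(π * Δx)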
Another language that makes extensive use of Unicode is Fortress.
Even if you decide not to use Unicode, the problem resurfaces when you use a library that does, so you have to live with it to a certain extent.