I'm really stumped trying to match Chinese characters using a Perl one-liner in zsh. I cannot get \p{script=Han} to match Chinese characters, but \P{script=Han} does.
Task:
I need to change this:
一
<lb/> 二
to this:
<tag ref="一二">一
<lb/> 二</tag>
There could be a variable number of tags, newlines, whitespace, tabs, alphanumeric characters, digits, etc. between the two Chinese characters. I believe the most efficient and robust way to do this would be to look for something that is *not* a Chinese character.
My attempted solution:
perl -0777 -pi -e 's/(一)(\P{script=Han}*?)(二)/<tag ref="$1$3">$2<\/tag>/g'
This has the desired effect when applied to the example above.
Problem:
The issue I am having is that \P{script=Han} (or \p{^script=Han}) matches Chinese characters as well.
When I try to match \p{script=Han}, the regex matches nothing, despite the file being full of Chinese characters; when I try to match \P{script=Han}, the regex matches every character in the file. I don't know why.
This is a problem because, in a case like the following, the output is not as desired:
一
<lb/> 三二
becomes
<tag ref="一二">一
<lb/> 三二</tag>
I don't want this to be matched at all, just instances where 一 and 二 are separated only by characters that are not Chinese characters.
Can anyone tell me what I'm doing wrong? Or suggest a workaround? Thanks!
When I try to match \p{script=Han}, the regex matches nothing despite it being a file full of Chinese characters.
The problem is that both your script and your input file are UTF-8 encoded, but you never tell perl that. Without being told, perl assumes they contain single-byte data, so each byte of a multi-byte character is treated as a separate (non-Han) character. That is why \p{script=Han} never matches and \P{script=Han} matches everything.
To declare that your script is UTF-8 encoded, use the utf8 pragma. To tell perl that all files you open are UTF-8 encoded, use the -CD command line option. So the following one-liner should solve your problem:
perl -Mutf8 -CD -0777 -pi -e 's/(一)(\P{script=Han}*?)(二)/<tag ref="$1$3">$2<\/tag>/g' file
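A quick way to see the difference (a minimal demonstration, assuming a UTF-8 terminal):
# without the utf8 pragma, the literal is three separate bytes, none of them Han
perl -e 'print "一" =~ /\p{Han}/ ? "match\n" : "no match\n"'          # prints "no match"
# with the pragma, the literal is a single Han character
perl -Mutf8 -e 'print "一" =~ /\p{Han}/ ? "match\n" : "no match\n"'   # prints "match"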
Related
I'm running Cygwin under Windows 10.
Have a dictionary file (1-dictionary.txt) that looks like this:
labelling labeling
flavour flavor
colour color
organisations organizations
végétales	v&#x00E9;g&#x00E9;tales
contrôlée	contr&#x00F4;l&#x00E9;e
"	&#x0022
The separator between the two columns is a TAB (\t).
The dictionary file is encoded as UTF-8.
Want to replace words and symbols in the first column with words and HTML entities in the second column.
My source file (2-source.txt) has the target UTF-8 and ASCII symbols. The source file also is encoded as UTF-8.
Sample text looks like this:
Cultivar was coined by Bailey and it is generally regarded as a portmanteau of "cultivated" and "variety" ... The International Union for the Protection of New Varieties of Plants (UPOV - French: Union internationale pour la protection des obtentions végétales) offers legal protection of plant cultivars ...Terroir is the basis of the French wine appellation d'origine contrôlée (AOC) system
I run the following sed one-liner in a shell script (./3-script.sh):
sed -f <(sed -E 's_(.+)\t(.+)_s/\1/\2/g_' 1-dictionary.txt) 2-source.txt > 3-translation.txt
The substitution of English (en-GB) words with American (en-US) words in 3-translation.txt is successful.
However the substitution of ASCII symbols, such as the quote symbol, and UTF-8 words produces this result:
vvégétales#x00E9;gvégétales#x00E9;tales)
contrcontrôlée#x00F4;lcontrôlée#x00E9;e (AOC)
If I use only the specific symbol (not the full word) I get results like this:
vé#x00E9;gé#x00E9;tales
"#x0022cultivated"#x0022
contrô#x00F4;lé#x00E9;e
The ASCII quote symbol is not replaced; its HTML entity is appended after it. Similarly, each UTF-8 character has its HTML entity appended rather than being replaced by it.
The expected output would look like this:
v&#x00E9;g&#x00E9;tales
&#x0022cultivated&#x0022
contr&#x00F4;l&#x00E9;e
How to modify the sed script so that target ASCII and UTF-8 symbols are replaced with their HTML entity equivalent as defined in the dictionary file?
I tried it: replacing every & with \& in your 1-dictionary.txt will solve your problem.
sed's s command treats the from part as a regular expression, so any regex metacharacters in the first column need a leading \ to escape them.
The to part has special characters as well, mainly \ and &, which must also be escaped with a leading \. An unescaped & in the replacement stands for the whole matched text, which is exactly what produced the duplicated words you are seeing.
These behaviours are documented for GNU sed; for other sed versions, check man sed.
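For example, a variant of your original one-liner that escapes the ampersands on the fly (a sketch, assuming the first column contains no regex metacharacters or slashes):
sed -f <(sed -E 's/&/\\&/g; s_(.+)\t(.+)_s/\1/\2/g_' 1-dictionary.txt) 2-source.txt > 3-translation.txt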
I would like to start by saying that I am not familiar with Perl. That being said, I came across this piece of code and I could not figure out what the \x was for in the code below. In addition, I was unsure why nothing was displayed when I ran the following:
perl -e 'print "\x7c\x8e\x04\x08"'
It's not about print: it's about string representation, where numeric codes stand for characters in your character set. For more information you should read "Quote and Quote-like Operators" in perlop and "Effects of Character Semantics" in perlunicode.
In your case the character code is in hex. You should look in your character set table, and you may need to convert to decimal first.
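For example, to see which character \x7c denotes, you can print its decimal code alongside the character itself:
perl -e 'printf "%d %c\n", 0x7c, 0x7c'    # prints "124 |"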
You said "I was unsure why nothing was displayed when I ran the following:"
perl -e 'print "\x7c\x8e\x04\x08"'
That command outputs 4 characters to STDOUT. Each of the characters is specified in hexadecimal. The "\x7c" part will output the vertical bar character |. The other three characters are control characters, so probably wouldn't produce any visible output. If you redirect output to a file, you will end up with a 4 byte file.
It's possible that you're not seeing the vertical bar character because it's being overwritten by your command prompt. Unlike the shell echo or Python's print, Perl's print function does not automatically append a newline to all output. If you want new lines, you can insert them in the string using \n.
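One way to confirm that the bytes really are being written is to pipe the output through a hex dumper:
perl -e 'print "\x7c\x8e\x04\x08"' | od -An -tx1    # shows: 7c 8e 04 08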
\x signifies the start of a hexadecimal character notation.
Perl's length function counts a Chinese character as more than one character. How do I determine the length of a string in Perl independent of the character encoding, so that each Chinese character counts as one character?
The length function operates on characters, not octets (AKA bytes). The definition of a character depends on the encoding. Chinese characters are still single characters (if the encoding is correctly set!) but they take up more than one octet of space. So, the length of a string in Perl is dependent on the character encoding that Perl thinks the string is in; the only string length that is independent of the character encoding is the simple byte length.
Make sure that the string in question is flagged as UTF-8 and encoded in UTF-8. For example, this yields 3:
$ perl -e 'print length("长")'
whereas this yields 1:
$ perl -e 'use utf8; print length("长")'
as does:
$ perl -e 'use Encode; print length(Encode::decode("utf-8", "长"))'
If you're getting your Chinese characters from a file, make sure that you binmode $fh, ':utf8' the file before reading or writing it; if you're getting your data from a database, make sure the database is returning strings in UTF-8 format (or use Encode to do it for you).
I don't think you have to have everything in UTF-8, you really only need to ensure that the string is flagged as having the correct encoding. I'd go with UTF-8 front to back (and even sideways) though as that's the lingua franca for Unicode and it will make things easier if you use it everywhere.
You might want to spend some time reading the perlunicode man page if you're going to be dealing with non-ASCII data.
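Putting that together, a minimal sketch that counts characters (not bytes) per line of a file; the filename chinese.txt is just a placeholder:
use strict;
use warnings;

# the :encoding(UTF-8) layer decodes the file as it is read
open my $fh, '<:encoding(UTF-8)', 'chinese.txt' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    print length($line), "\n";    # each Chinese character counts as 1
}
close $fh;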
I have a file like this:
my line - some words & text
oh lóok i've got some characters
I want to 'normalize' it and remove all the non-word characters. I want to end up with something like this:
mylinesomewordstext
ohlóokivegotsomecharacters
I'm using Linux on the command line at the moment, and I'm hoping there's some one-liner I can use.
I tried this:
cat file | perl -pe 's/\W//g'
But that removed all the newlines and put everything on one line. Is there some way I can tell Perl not to include newlines in \W? Or is there some other way?
This removes characters that don't match \w or \n:
cat file | perl -C -pe 's/[^\w\n]//g'
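A quick check that the accented character survives (assuming a UTF-8 locale and terminal):
printf 'oh lóok\n' | perl -C -pe 's/[^\w\n]//g'    # prints "ohlóok"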
sth's solution uses Perl, which is (at least on my system) not Unicode compatible, so it loses the accented o character.
On the other hand, sed is Unicode compatible (according to the lists on this page), and gives a correct result:
$ sed 's/\W//g' a.txt
mylinesomewordstext
ohlóokivegotsomecharacters
In Perl, I'd just add the -l switch, which chomps the newline from each input line and re-adds it by appending it to every print():
perl -ple 's/\W//g' file
Notice that you don't need the cat.
The previous answer isn't echoing the "ó" character, at least in my case.
sed 's/\W//g' file
Best practice for shell scripting is to use the tr program rather than sed when deleting or replacing single characters, because it's faster. Use sed when replacing longer strings.
tr -d '[:blank:][:punct:]' < file
When run with time I get:
real 0m0.003s
user 0m0.000s
sys 0m0.004s
When I run the sed answer (sed -e 's/\W//g' file) with time I get:
real 0m0.003s
user 0m0.004s
sys 0m0.004s
While not a huge difference here, it becomes noticeable on larger data sets. Also note that I didn't pipe cat's output into tr; I used I/O redirection instead (one less process to spawn).
Given a directory of filenames consisting of double-byte/full-width numbers and spaces (along with some half-width numbers and underscores), how can I convert all of the numbers and spaces to single-byte characters?
For example, this filename consists of a double-byte number, followed by a double-byte space, followed by some single-byte characters:
２　2_3.ext
and I'd like to change it to all single-byte like so:
2 2_3.ext
I've tried convmv to convert from utf8 to ascii, but the following message appears for all files:
"ascii doesn't cover all needed characters for: filename"
You need either (1) normalization from Java 1.6 (java.text.Normalizer), (2) ICU, or (3, unlikely) a product sold by the place I work.
What tools do you have available? There are Unicode normalisation functions in several scripting languages, for example in Python:
import os
import unicodedata

# NFKC normalization folds full-width characters to their half-width equivalents
for child in os.listdir(u'.'):
    normal = unicodedata.normalize('NFKC', child)
    if normal != child:
        os.rename(child, normal)
Thanks for your quick replies, bmargulies and bobince. I found a Perl module, Unicode::Japanese, that helped get the job done. Here is a bash script I made (with help from this example) to convert filenames in the current directory from full-width to half-width characters:
#!/bin/bash
for file in *; do
    # z2h converts full-width (zenkaku) characters to half-width (hankaku)
    newfile=$(echo "$file" | perl -MUnicode::Japanese -e 'print Unicode::Japanese->new(<>)->z2h->get;')
    test "$file" != "$newfile" && mv "$file" "$newfile"
done
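For comparison, a sketch of the same loop using the NFKC normalization suggested above (via the core Unicode::Normalize module), which also folds full-width digits and spaces to their ASCII equivalents:
#!/bin/bash
for file in *; do
    # -CSD decodes STDIN and encodes STDOUT as UTF-8; NFKC does the folding
    newfile=$(printf '%s' "$file" | perl -CSD -MUnicode::Normalize -pe '$_ = NFKC($_)')
    test "$file" != "$newfile" && mv -- "$file" "$newfile"
done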