Where is en_US.UTF-8 defined?

Where is the actual definition of the collating and comparison mappings for en_US.UTF-8? I assume there's some standards document, reference source code, and/or data table available somewhere?

It's Unicode.
/usr/lib/locale/en_US.utf8/LC_COLLATE is created by localedef. man localedef shows the input path /usr/share/i18n/locales.
The LC_COLLATE section of /usr/share/i18n/locales/en_US references the file iso14651_t1, which in turn references iso14651_t1_common, a file published by ISO whose header names the originating source unidata-9.0.0.txt. Run git clone git://sourceware.org/git/glibc.git to browse the history of these files.
http://enwp.org/ISO_14651 says the ISO standard and UCA are aligned, so the corresponding file at unicode.org is allkeys.txt.
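For example, a rough way to follow that chain on a glibc system (paths are the typical ones named above and vary by distribution):
grep -A3 '^LC_COLLATE' /usr/share/i18n/locales/en_US      # shows the reference to iso14651_t1
grep -n 'iso14651_t1_common' /usr/share/i18n/locales/iso14651_t1
head -n 30 /usr/share/i18n/locales/iso14651_t1_common     # the header cites the Unicode source data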

Related

Removing folders with "bad" names on GitHub

I did not realize that certain characters were not allowed for folder names on github and named a couple of folders with the character ":". I cannot figure out how to rename/delete these folders. I don't care about the data inside, I can just reupload.
Anyone know how to fix this?
In general, Git is capable of handling arbitrary byte sequences in file names because it's designed for Unix systems. That means any character except forward slash or NUL can appear in a path component, including characters such as 0xfe and 0xff, which are not valid UTF-8. Colons are one of those permitted characters.
GitHub also does not have a problem with arbitrary bytes. However, if the path isn't valid UTF-8, it might not be rendered properly in the web interface, although it should still be supported via operations with Git.
However, there are some operating systems which are less capable. For example, Windows excludes many common punctuation characters from permissible file names. As a result, you may wish to be kind to users of those operating systems and not use file names that cause problems there.
Since you're on Windows, you'll have some trouble checking out the repository. The best thing to do is clone the repository on a Linux system or under Windows Subsystem for Linux, rename the files or directories with git mv, and then commit and push. macOS should also be able to handle colons in path names, although it requires that the path names be valid UTF-8.
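For instance, a minimal sequence on Linux or WSL (repository URL and names are placeholders):
git clone https://github.com/user/repo.git
cd repo
git mv 'reports:2023' reports-2023      # rename the directory whose name contains a colon
git commit -m 'Rename folder so it can be checked out on Windows'
git push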

Informatica - Is it feasible to determine & if required convert a files encoding through use of a program called via a Command Task?

Our company processes invoice data from various markets and in multiple languages. Typically the data is delivered in .txt or .dat files. The format or layout of the invoice data in these files may be common between many markets. For example, the placement of an invoice number in a file from Brazil will be the same as in a file from Russia or a file from the United Kingdom. The encoding of the source files can vary: a file from the UK may be encoded as ASCII, a file from Brazil as ANSI, and a file from Russia as UTF-8. This is not set in stone. Our target database is configured as UTF-8.
As the data layout of every file is fundamentally identical, we would like to, if possible, process all files through one Informatica workflow and, where needed, convert the file encoding at runtime.
I'm not a Java developer, but it occurred to me that a jar could be called from a Command Task to check a file's encoding and run a conversion if required.
Or should I be looking at another type of solution?
The idea is to convert all non-UTF files to UTF-8, which is the superset that can handle pretty much all character sets. So, follow the steps below (a combined sketch follows the list):
use file --mime-encoding inp_file to check the encoding
use PowerShell (see superuser.com/questions/1163753/…) or a Unix/Linux shell (see stackoverflow.com/questions/64860/…) to do the conversion
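A minimal shell sketch of both steps (the file name is a placeholder, and file's guess is heuristic, so verify it against sample data before wiring it into the workflow):
enc=$(file --brief --mime-encoding inp_file)                 # e.g. us-ascii, iso-8859-1, utf-8
if [ "$enc" != "utf-8" ] && [ "$enc" != "us-ascii" ]; then
    iconv -f "$enc" -t UTF-8 inp_file > inp_file.utf8 && mv inp_file.utf8 inp_file
fi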

How to keep BOM from removal from Perforce unicode files

I have converted an entire branch with .NET and SQL sources to UTF-8 with BOM, having their Perforce file type changed to Unicode in the same operation. (The encoding difference might sound confusing, but in Perforce, the Unicode file type denotes UTF-8 file content.) But later I found out that Perforce silently eliminates the BOM marker from UTF-8 files. Is it possible to set Perforce to keep UTF-8 BOM markers in files of the Unicode file type? I can't find it documented.
Perforce server is switched to Unicode mode, connection encoding is UTF-8 no BOM (but changing it to UTF-8 with BOM doesn't make any difference).
Example:
check out a source file from Perforce
change file type to Unicode
convert file content to format "UTF-8 with BOM"
submit the file (now the file still keeps the BOM in its first 3 bytes)
remove the file from workspace
get the latest revision of the file (now the file doesn't contain the BOM at the beginning)
OK, Hans Passant's comment encouraged me to re-examine P4CHARSET, and finally the answer has two parts:
For Perforce command-line access, the P4CHARSET variable controls the behavior. To enable adding a BOM to files of the Unicode type, use the command
p4 set P4CHARSET=utf8-bom
To have these files without a BOM, use
p4 set P4CHARSET=utf8
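For example, to confirm the BOM really comes back after switching to utf8-bom (depot path and file name are hypothetical):
p4 set P4CHARSET=utf8-bom
p4 sync -f //depot/main/src/Program.cs       # force a fresh copy from the server
head -c 3 Program.cs | xxd                   # a UTF-8 BOM appears as the bytes ef bb bf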
For P4V, the Perforce Visual Client, the setting can be changed via the menu Connection > Choose Character Encoding.... Use the value Unicode (UTF-8) to enable adding a BOM and Unicode (UTF-8, no BOM) to suppress it.
If the menu item Choose Character Encoding... is disabled, ensure the following (and then check again):
P4V has a connection to the server open and working
the pane containing the depot/workspace tree is focused (click inside it to make sure)
Notes:
if you usually combine both of the above ways to access Perforce, you need to apply both settings; otherwise you will keep getting mixed results
if you want to immediately add/remove the BOM to/from existing files, adjust the above settings, then remove the files from the workspace and get them again (see steps 5 and 6 of the example posted in the question). Other server actions that change the content of files (integrating, merging, etc.) will do something similar
for other encoding options and their impact on the BOM, see the second table in the Internationalization Notes for P4D, the Perforce server, and Perforce client applications

Force Unicode on Data Transfer utility for iSeries AS400 for TSV tab delimited files

I am using the Data Transfer utility for IBM i in order to create TSV files from my AS400s and then import them to my SQL Server data warehouse.
Following this SO question about an SSIS encoding script, I want to stop using conversion in the SSIS task and have the data ready from the source.
I have tried various codepages in the TSV creation (1200, etc.), but 1208 only does half the trick: it creates UTF-8, which I then have to convert to Unicode as shown in the other question.
What CCSID do I have to use to get Unicode from the start?
Utility Screenshot:
On IBM i, CCSID support is intended to be seamless. Imagine the situation where the table is in German encoding, your job is in English and you are creating a new table in French - all on a system whose default encoding is Chinese. Use the appropriate CCSID for each of these and the operating system will do the character encoding conversion for you.
Unfortunately, many midrange systems aren't configured properly. Their system default CCSID is 'no CCSID / binary' - a remnant of a time some 20 years ago, before CCSID support. DSPSYSVAL QCCSID will tell you what the default CCSID is for your system. If it's 65535, that's 'binary'. This causes no end of problems, because the operating system can't figure out what the true character encoding is. Because CCSID(65535) was set for many years, almost all the tables on the system have this encoding. All the jobs on the system run under this encoding. When everything on the system is 65535, then the OS doesn't need to do any character conversion, and all seems well.
Then, someone needs multi-byte characters. It might be an Asian language or, as in your case, Unicode. If the system as a whole is 'binary / no conversion', it can be very frustrating because, essentially, the system admins have lied to the operating system about the character encoding that is in effect for the database and jobs.
I'm guessing that you are dealing with a CCSID(65535) environment. I think you are going to have to request some changes. At the very least, create a new work table using an appropriate CCSID like EBCDIC US English (37). Use a system utility like CPYF to populate this table. Now try to download that, using a CCSID of, say, 13488. If that does what you need, then perhaps all you need is an intermediate table to pass your data through.
Ultimately, the right solution is a proper CCSID configuration. Have the admins set the QCCSID system value and consider changing the encoding on the existing tables. After that, the system will handle multiple encodings seamlessly, as intended.
The CCSID on IBM i called 13488 is the Unicode type UCS-2 (UTF-16 big endian). There is not "one Unicode"; there are several Unicode encoding formats. I looked at your other question: 1208 is also Unicode, in UTF-8. So what exactly is meant by "to get Unicode to begin with" is not clear (you are getting Unicode to begin with, in UTF-8 format). But then I read your other question, and the name of the encoding the function uses does not make it obvious which kind of "Unicode" it means (in .NET, Encoding.Unicode is UTF-16 little-endian):
using (StreamWriter writer = new StreamWriter(to, false, Encoding.Unicode, 1000000))
By default, the IBM i operating system mainly stores data in EBCDIC database tables, and there are some rare applications built on this system that use Unicode natively. It will translate the data into whatever type of Unicode it supports.
As for SQL Server and Java, I am fairly sure they use UCS-2 style Unicode, so if you use CCSID 13488 on the AS/400 side for the transfer, it may let you avoid the extra conversion from UTF-8, because CCSID 13488 is UCS-2 style Unicode.
https://www-01.ibm.com/software/globalization/ccsid/ccsid_registered.html
There are two CCSIDs for UTF-8 Unicode on System i: 1208 and 1209. 1208 is UTF-8 with IBM PUA; 1209 is plain UTF-8. See the link above.

Encoding problems with ogr2ogr and Postgis/PostgreSQL database

In our organization, we handle GIS content in different file formats. I need to put these files into a PostGIS database, and that is done using ogr2ogr. The problem is, that the database is UTF8 encoded, and the files might have a different encoding.
I found descriptions of how I can specify the encoding by adding an options parameter to ogr2ogr, but apparently it doesn't have any effect.
ogr2ogr -f PostgreSQL PG:"host=localhost user=username dbname=dbname \
password=password options='-c client_encoding=latin1'" sourcefile;
The error I receive is:
ERROR 1: ALTER TABLE "soer_vd" ADD COLUMN "målsætning" CHAR(10)
ERROR: invalid byte sequence for encoding "UTF8": 0xe56c73
HINT: This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by "client_encoding".
ERROR 1: ALTER TABLE "soer_vd" ADD COLUMN "påvirkning" CHAR(10)
ERROR: invalid byte sequence for encoding "UTF8": 0xe57669
HINT: This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by "client_encoding".
ERROR 1: INSERT command for new feature failed.
ERROR: invalid byte sequence for encoding "UTF8": 0xf8
HINT: This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by "client_encoding".
Currently, my source file is a Shapefile, and I'm pretty sure that it is Latin1 encoded.
What am I doing wrong here and can you help me?
Kind regards, Casper
Magnus is right and I will discuss the solution here.
I have seen the option to inform PostgreSQL about the character encoding, options='-c client_encoding=xxx', used in many places, but it does not seem to have any effect. If someone knows how this part works, feel free to elaborate.
Magnus suggested to set the environment variable PGCLIENTENCODING to LATIN1. This can, according to a mailing list I queried, be done by modifying the call to ogr2ogr:
ogr2ogr --config PGCLIENTENCODING LATIN1 -f PostgreSQL
PG:"host=hostname user=username dbname=databasename password=password" inputfile
This didn't do anything for me. What worked for me was to run the following before the call to ogr2ogr:
SET PGCLIENTENCODING=LATIN1
It would be great to hear more details from experienced users and I hope it can help others :)
That does sound like it would set the client encoding to LATIN1. Exactly what error do you get?
Just in case ogr2ogr doesn't pass it along properly, you can also try setting the environment variable PGCLIENTENCODING to latin1.
I suggest you double check that they are actually LATIN1. Simply running file on it will give you a good idea, assuming it's actually consistent within the file. You can also try sending it through iconv to convert it to either LATIN1 or UTF8.
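For example, a quick command-line check (the file name is a placeholder, and a .dbf has a binary header, so treat the result as a hint rather than proof):
iconv -f UTF-8 -t UTF-8 sourcefile.dbf > /dev/null 2>&1 \
  && echo "bytes are valid UTF-8" \
  || echo "not valid UTF-8 - likely a legacy encoding such as Latin1"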
You need to write your command line like this:
PGCLIENTENCODING=LATIN1 ogr2ogr -f PostgreSQL PG:"dbname=...
Currently, OGR from GDAL does not perform any recoding of character data during translation between vector formats. The team has prepared the document RFC 23.1: Unicode support in OGR, which discusses support for recoding in OGR drivers. RFC 23 was adopted, and the core functionality was released in GDAL 1.6.0. However, most OGR drivers have not been updated, including the Shapefile driver.
For the time being, I would describe OGR as encoding agnostic and ignorant. That means OGR takes what it gets and sends it out without any processing. OGR uses the char type to manipulate textual data, which is fine for handling multi-byte encoded strings (like UTF-8); they are just a plain stream of bytes stored as an array of char elements.
Developers of OGR drivers are advised to return attribute values as UTF-8 encoded strings; however, this rule has not been widely adopted across OGR drivers, so this functionality is not end-user ready yet.
On Windows the command is
SET PGCLIENTENCODING=LATIN1
On Linux
export PGCLIENTENCODING=LATIN1
or
PGCLIENTENCODING=LATIN1
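Note that the plain assignment on its own only sets a shell variable; for ogr2ogr (a child process) to see it, either export it or prefix it to the command itself (connection details are placeholders):
export PGCLIENTENCODING=LATIN1       # visible to every command started from this shell afterwards
ogr2ogr -f PostgreSQL PG:"host=localhost user=username dbname=dbname password=password" sourcefile.shp
# or scope it to a single invocation
PGCLIENTENCODING=LATIN1 ogr2ogr -f PostgreSQL PG:"host=localhost user=username dbname=dbname password=password" sourcefile.shp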
Moreover, this discussion helped me:
https://gis.stackexchange.com/questions/218443/ogr2ogr-encoding-on-windows-using-os4geo-shell-with-census-data
On Windows,
SET PGCLIENTENCODING=LATIN1 ogr2ogr...
does not give me any error, but ogr2ogr does not work... I needed to change the system variable (e.g. System --> Advanced system settings --> Environment variables --> New system variable), reboot the system, and then run
ogr2ogr...
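A lighter-weight alternative, which an earlier answer reports working, is to issue the SET as its own command in the same console session and then call ogr2ogr separately (connection details are placeholders):
SET PGCLIENTENCODING=LATIN1
ogr2ogr -f PostgreSQL PG:"host=localhost user=username dbname=dbname password=password" sourcefile.shp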
I solved this problem using this command:
pg_restore --host localhost --port 5432 --username postgres --dbname {DBNAME} --schema public --verbose "{FILE_PATH to import}"
I don't know if this is the right solution, but it worked for me.
For some reason, I don't know why, I could not import tables with ÅÄÖ in them into the public schema.
When I created a new schema, I could import the tables into the new schema.