How to handle newline characters when using COPY in PostgreSQL

I have text that has the following form in my csv:
'0001'|'text1'|'\ntext2'|'text3'\n
However, when I try to import the data into my Postgres instance, the import keeps breaking because COPY treats the first newline character as the start of a new row. Is there an easy way to tell Postgres to import the newline character as part of the field?

If the delimiter and quote characters are set explicitly, special characters are taken literally instead of being interpreted. The same goes for quotes: the parser needs to know how to recognize quoted strings so that it does not interpret an embedded newline as a row terminator.
Here's the documentation:
Backslash characters (\) can be used in the COPY data to quote data
characters that might otherwise be taken as row or column delimiters.
In particular, the following characters must be preceded by a
backslash if they appear as part of a column value: backslash itself,
newline, carriage return, and the current delimiter character.
So you might have:
COPY data FROM STDIN WITH CSV HEADER DELIMITER E'|' QUOTE E'\'';
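As a self-contained sketch of the whole round trip (the table definition here is an assumption inferred from the sample row, and HEADER is dropped because the sample has no header line):

CREATE TABLE data (id text, col1 text, col2 text, col3 text);  -- hypothetical columns

-- single quote as the quote character, pipe as the delimiter;
-- the line break stays inside the quoted second field
COPY data FROM STDIN WITH (FORMAT csv, DELIMITER '|', QUOTE '''');
'0001'|'text1'|'
text2'|'text3'
\.

Because the field is quoted, the embedded newline is loaded into the column instead of starting a new row.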

Related

Trying to work around the error DF-CSVWriter-InvalidEscapeSetting

So I have a dataset which I want to export to csv with pipe as separator and no escape character.
In fact, that dataset contains 4 source columns: 3 regular ones (just text) and one variable one.
That last column holds another subset of values, also separated by a pipe.
The purpose is that the export looks like this, where the values come from my 4th field:
COL1|COL2|COL3|VAL1|VAL2|VAL3|....
The number of values can differ for each record.
When I set the csv export separator to ";", I get this result, which is expected:
COL1;COL2;COL3;VAL1|VAL2|VAL3|....
However, setting it to "|" throws the error DF-CSVWriter-InvalidEscapeSetting.
Most likely this is because it detected the separator character in my 4th field and then enforces that an escape character be set.
That is logical in most cases, but here I would like it to ignore this and just export as-is.
Is there any way I can work around this, perhaps with a different approach or some additional settings?
Split & flatten produces extra rows, but that's not what I want.
Regards,
Sven Peeters
Because the column values contain the same character as your delimiter, a dataset with no escape character set will throw this error.
You have to either change the delimiter to a different character, or set both the Quote character and the Escape character to a double quote (").
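For example, with the Quote and Escape characters both set to a double quote, the exported row would look like this (a sketch of the resulting file; note the writer wraps the 4th field in quotes, which is the trade-off of this fix):

COL1|COL2|COL3|"VAL1|VAL2|VAL3|..."

Any downstream parser that understands quoting can then split the row back into four fields.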

defining escape character for a csv import

I have a source file that has text columns which end with a "\" and I have specified "^" as the column delimiter.
The file format I have specified for this uses ESCAPE = 'NONE', but rows with "\^" cause premature end-of-line errors - I assume SF is not interpreting the "\^" as a column delimiter, and therefore the column count comes up short.
I have changed the file format to use something else for ESCAPE but get the same message. The offending rows have the right number of columns, and a text column containing "\" imports correctly as long as the backslash is not the last character in the column.
The values are exported from SQL Server.
Is this an escape character problem or am I overlooking something else? I am new to SF.
I was seeing this same issue. No matter what I used as an escape character, when it showed up in my file next to a " at the end of a string, it started causing trouble.
I switched my delimiter to \u0001 which is a special "start of header" character that very rarely shows up, especially at the end of strings.
I wouldn't say this was an ideal option for us, but it worked and is something you might want to try.
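As a minimal sketch of that in a Snowflake file format (the format name is made up; per Snowflake's documentation, FIELD_DELIMITER accepts common escape sequences, octal values, or hex values):

CREATE OR REPLACE FILE FORMAT soh_delimited  -- hypothetical name
  TYPE = 'CSV'
  FIELD_DELIMITER = '0x01'  -- \u0001, the "start of heading" control character
  ESCAPE = 'NONE';

The exporting side (SQL Server, in this case) has to be changed to emit the same delimiter, of course.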

postgresql - pgloader - quotes handling

I am new to postgresql and just starting to use it. I am trying to load a file into a table and facing some issues.
Sample data - the file file1.RPT contains data in the format below:
"Bharath"|Kumar|Krishnan
abc"|def|ghi
qwerty|asdfgh|lkjhg
Below is the load script that is used
LOAD CSV
INTO table1
....
WITH truncate,
fields optionally enclosed by '"',
fields escaped by '"',
fields terminated by '|'
....
However, the above script is not working and does not load any data into the table. I am not sure what the issue is. My understanding is that the first row has to load successfully (since I have given optionally enclosed by) and the second row must also load (since I am trying to escape the double quote).
Request help in getting the same rectified.
Thank you.
You cannot escape and optionally quote with the same character. If a double quote can appear as part of the data, it can be treated as plain data using the fields not enclosed option. The default is fields optionally enclosed by double quote.
As it stands, the quote in the second row is not escaped. Either you must put a backslash (or another escape character) before it:
abc\"|def|ghi
or you should enclose the entire field in quotes.
Another alternative is to accept quotes as part of the first field's data, in which case you should use
fields not enclosed
in your load script.
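A minimal sketch of the load script under that last option, keeping the placeholders from the original script:

LOAD CSV
INTO table1
....
WITH truncate,
fields not enclosed,
fields terminated by '|'
....

With fields not enclosed, pgloader treats double quotes as ordinary data, so all three sample rows load, quotes included.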

Identify hidden control character and ignore when scanning csv file

I am trying to use textscan in MATLAB to read in mixed-format data from a .csv file. The problem I am currently running into is that a number of nonvisible characters are being read in as part of a string when I am not expecting them. I believe that setting this character as a delimiter or as whitespace will solve my text-scanning issue.
My main problem at the moment is that I don't know what character it is to be able to identify it. I have used isstrprop to determine that it is a control character. I guessed that it was the NUL character, so I tried adding \0 to the delimiter set for textscan. Unfortunately MATLAB does not recognize that as a valid \ constant.
Below is one line of the data file, copied from Notepad. The characters preceding each of the commas are the ones in question. The following line is the command I used in MATLAB to read it.
1 ,T,171215,173201,21.982413N,159.342881W,150 ,0 ,0 ,3D,SPS ,2.7 ,2.5 ,1.0 ,
C = textscan(fid,'%d%s%d%d%s%s%d%d%d%s%s%f%f%f%s','delimiter',',','headerlines',1,'MultipleDelimsAsOne',1)
Also, for what it's worth, using deblank on the string of characters that is read in does remove them. However, I only know how to apply this after the textscan, so the characters still throw off the parsing.
How can I identify this character and set it to be ignored by textscan?
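One way to pin the character down (a sketch, not from the thread; the file name and the discovered code 16 are illustrative assumptions) is to pull one raw line off the file, print the numeric codes of its control characters, and then hand the actual character to textscan instead of a '\' escape:

fid = fopen('data.csv');                % hypothetical file name
fgetl(fid);                             % skip the header line
raw = fgetl(fid);                       % one raw data line
codes = double(raw);                    % numeric code of every character
disp(codes(isstrprop(raw, 'cntrl')))    % show only the control characters
frewind(fid);
% suppose the code turns out to be 16: pass char(16) as an extra delimiter
C = textscan(fid, '%d%s%d%d%s%s%d%d%d%s%s%f%f%f%s', ...
    'delimiter', {',', char(16)}, 'headerlines', 1, ...
    'MultipleDelimsAsOne', 1);
fclose(fid);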

Allowed characters in CSS 'content' property?

I've read that we must use Unicode values inside the content CSS property i.e. \ followed by the special character's hexadecimal number.
But what characters, other than alphanumerics, are actually allowed to be placed as is in the value of content property? (Google has no clue, hence the question.)
The rules for “escaping” characters are in the CSS 2.1 specification, clause 4.1.3 Characters and case. The special rules for quoted strings, as in the content property value, are in clause 4.3.7 Strings. Within a quoted string, any character may appear as such, except for the character used to quote the string (" or '), a newline character, or the backslash character \.
The information that you must use \ escapes is thus wrong. You may use them, and may even need to use them if the character encoding of the document containing the style sheet does not let you enter all characters directly. But if the encoding is UTF-8, and is properly declared, then you can write content: '☺ Я Ω ⁴ ®'.
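To illustrate (a small made-up example): only the string's own quote character, a literal newline, and the backslash itself need escaping; everything else, including the unused quote character ", may appear as-is:

.note::before {
  content: 'it\'s a backslash \\ then a newline:\A second line';
  white-space: pre; /* needed so the \A escape renders as a line break */
}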
As far as I know, you can insert any Unicode character. (Here's a useful list of Unicode characters and their codes.)
To utilize these codes, you must escape them, like so:
U+27BA becomes \27BA
Or, alternatively, I think you may just be able to escape the character itself:
content: '\➺';
Source: http://mathiasbynens.be/notes/css-escapes
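One caveat from that source worth restating: a hex escape is terminated either by a whitespace character or by reaching six digits, so if the next character of the string is itself a hex digit you must separate or pad the escape:

content: '\27BA1';    /* parsed as the single code point U+27BA1, not ➺ followed by 1 */
content: '\27BA 1';   /* the space terminates the escape and is dropped */
content: '\0027BA1';  /* six digits need no terminator */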