My text file looks like this:
\home\stanley:123456789
c:/kobe:213
\tej\home\ant:222312
and I create the FOREIGN TABLE like this:
CREATE FOREIGN TABLE file_check(txt text) SERVER file_server OPTIONS (format 'text', filename '/home/stanley/check.txt');
After selecting from file_check (using: select * from file_check),
my console shows me:
homestanley:123456789
c:/kobe:213
ejhomeant:222312
Can anyone help me?
The file foreign-data wrapper (file_fdw) uses the same rules as COPY (presumably because it's the same code underneath). You've got to consider that backslash is an escape character in COPY's default text format...
http://www.postgresql.org/docs/9.2/static/sql-copy.html
Any other backslashed character that is not mentioned in the above table will be taken to represent itself. However, beware of adding backslashes unnecessarily, since that might accidentally produce a string matching the end-of-data marker (\.) or the null string (\N by default). These strings will be recognized before any other backslash processing is done.
So you'll either need to double up the backslashes, or perhaps try it as a single-column CSV file and see if that helps.
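For example, a minimal sketch of the CSV variant (reusing the file path and server name from the question, after dropping the existing definition): in CSV format the backslash is not an escape character, and since the sample lines contain no commas or double quotes, each line lands unchanged in the single text column.
CREATE FOREIGN TABLE file_check (txt text)
  SERVER file_server
  OPTIONS (format 'csv', filename '/home/stanley/check.txt');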
Related
I have a very long comment I want to add to a Postgres table.
Since I do not want a very long single line as a comment I want to split it into several lines.
Is this possible? \n does not work since Postgres does not use the backslash as an escape character.
Just write a multi-line string:
COMMENT ON TABLE foo IS 'This
comment
is stored
in multiple lines';
You can also embed \n escape sequences in “extended” string constants that start with E:
COMMENT ON TABLE foo IS E'A comment\nwith three\nlines.';
You can use automatic concatenation of adjacent string literals (they must be separated by a newline) together with \n escape sequences in an E'...' string for linebreaks:
COMMENT ON TABLE foo IS E''
'This comment is stored in multiple lines. But only some'
'end with linebreaks like this one.\n'
'You can even create empty lines to simulate paragraphs:'
'\n\n'
'This would be the second paragraph, then.';
Details:
Note the initial E'' at the end of the first line. This is essential to make all the adjacent string literals that follow it use the extended string literal syntax, giving us the option to write \n for a linebreak. Of course, that E could also be placed at the start of the real string in the second line instead: E'This comment …'. Putting it into the first line is just source-code aesthetics: character alignment and the like.
I consider this solution slightly better than multi-line strings (proposed in another answer here) because it lets the comment fit within the typical line-width limit and the indentation requirements of source files. That is useful when you keep your SQL in well-formatted files under version control, treating it just like any other source code. With multi-line strings, on the other hand, any indentation ends up as lots of additional whitespace in the live table comment.
Note for the OP: when you say "I do not want a very long single line as a comment", it is not clear whether you don't want that long line in your .sql source code file, or whether you don't want it in the comment of the live table, such as when seen in a database admin tool. It does not really matter, as this solution gives you tools for both purposes: use adjacent string literals to fit your lines into the source code file without affecting line breaks in the live table comment, and use \n to create line breaks and empty lines in the live table comment.
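To check what actually ended up in the live comment, you can query the catalog; a minimal sketch using the foo table from the examples above:
SELECT obj_description('foo'::regclass, 'pg_class');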
I am new to PostgreSQL and just starting to use it. I am trying to load a file into a table and am facing some issues.
Sample data: the file file1.RPT contains data in the format below.
"Bharath"|Kumar|Krishnan
abc"|def|ghi
qwerty|asdfgh|lkjhg
Below is the load script that is used
LOAD CSV
INTO table1
....
WITH truncate,
fields optionally enclosed by '"',
fields escaped by '"'
fields terminated by '|'
....
However, the above script is not working and does not load any data into the table. I am not sure what the issue is here. My understanding is that the first row has to load successfully (since I have specified optionally enclosed by) and the second row must load as well (since I am trying to escape the double quote).
Please help me rectify this.
Thank you.
You cannot use the same character both as the escape character and as the optional enclosing character. If the double quote is meant to be part of the data, it can be taken literally by using the fields not enclosed option. The default is fields optionally enclosed by double quote.
Apparently, you're not actually escaping the quote in the second row. To escape it, you would either need an escape character (a backslash, for example) before it:
abc\"|def|ghi
or you would need to enclose the whole field in quotes and double the embedded quote.
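For example, assuming the standard CSV convention of doubling a quote inside a quoted field, the second row could look like:
"abc"""|def|ghi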
Another alternative is to accept the quotes as part of the data; in that case, use
fields not enclosed
in your load script, as in the sketch below.
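A minimal sketch of that load script, keeping the elided parts and the target table from the question:
LOAD CSV
INTO table1
....
WITH truncate,
     fields not enclosed,
     fields terminated by '|'
....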
I've been working on an Express app that has a form designed to hold lines and quotes.
Some of the lines will have single quotes ('), but overall the app stores the info without any problems, and I'm able to back it up. Now, when I run pg_dump to put the database into an SQL file, the quotes seem to cause some things to appear a bit wonky in my text editor.
Would I have to create a method to change all the single quotation marks into double ones, or can I leave them as they are and upload the dump back to the database without causing major issues? I know people will continue to enter lines that contain either single or double quotation marks, so any solution or answer would help greatly.
Single quotes in character data types are no problem at all. You just need to escape them properly in string literals.
To write data with INSERT you need to quote all string literals according to SQL syntax rules. There are tools to do that for you ...
Insert text with single quotes in PostgreSQL
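For illustration, you can double the single quote inside a standard string literal, or sidestep escaping entirely with dollar quoting (hypothetical table and column names):
INSERT INTO quotes (line) VALUES ('It''s stored just fine');
INSERT INTO quotes (line) VALUES ($$It's stored just fine$$);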
However, pg_dump takes care of escaping automatically. The default mode produces text output to be re-imported with COPY (much faster than INSERT), and single quotes have no special meaning there. And in COPY's (non-default) CSV mode, the default quote character is the double quote (") and is configurable. The manual:
QUOTE
Specifies the quoting character to be used when a data value is quoted. The default is double-quote. This must be a single one-byte character. This option is allowed only when using CSV format.
The format is defined by rules for COPY and not by SQL syntax rules.
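For illustration, a COPY export in CSV format (hypothetical table and file names; writing a server-side file needs appropriate privileges) quotes values with double quotes, not single quotes:
COPY quotes TO '/tmp/quotes.csv' WITH (FORMAT csv, QUOTE '"');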
I am trying to upload data to Redshift using the COPY command.
On this row:
4072462|10013868|default|2015-10-14 21:23:18.0|0|'A=0
I am getting this error:
Delimited value missing end quote
This is the COPY command:
copy test
from 's3://test/test.gz'
credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx' removequotes escape gzip
First, I hope you know why you are getting this error: you have a single quote in one of the column values. When using the removequotes option, the Redshift documentation clearly says:
If a string has a beginning single or double quotation mark but no corresponding ending mark, the COPY command fails to load that row and returns an error.
One thing is certain: removequotes is not what you are looking for.
Second, what are your options?
If preprocessing the S3 file is in your control, consider using the escape option. Per the documentation,
When this parameter is specified, the backslash character (\) in input data is treated as an escape character.
So your input row in S3 should change to something like:
4072462|10013868|default|2015-10-14 21:23:18.0|0|\'A=0
Alternatively, see if CSV DELIMITER '|' works for you. Check the COPY documentation for details.
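A minimal sketch of that variant, keeping the bucket, table, and credential placeholders from the question; in CSV mode the quote character defaults to the double quote, so a lone single quote in the data is not treated specially:
copy test
from 's3://test/test.gz'
credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx'
gzip csv delimiter '|';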
In my Ruby-on-Rails database.yml file, I accidentally created a PostgreSQL database with a forward slash (/) in its name.
I have been unable to remove this database via psql commands, trying with various escape sequences.
Surround your database name in quotes:
DROP DATABASE "database/withslash";
From the Identifiers and Keywords documentation:
There is a second kind of identifier: the delimited identifier or quoted identifier. It is formed by enclosing an arbitrary sequence of characters in double-quotes ("). A delimited identifier is always an identifier, never a key word. So "select" could be used to refer to a column or table named "select", whereas an unquoted select would be taken as a key word and would therefore provoke a parse error when used where a table or column name is expected. The example can be written with quoted identifiers like this:
UPDATE "my_table" SET "a" = 5;
Quoted identifiers can contain any character, except the character with code zero.
Do note that quoted identifiers are case sensitive.
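For example (hypothetical table name):
CREATE TABLE "Foo" (id int);
SELECT * FROM Foo;    -- fails: unquoted names fold to lower case, so this looks for "foo"
SELECT * FROM "Foo";  -- works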
You cannot drop a database while connected to that database though, so you may want to use the command-line dropdb program instead. dropdb quotes the identifier for you, so just pass the name as an ordinary shell argument (a forward slash is not special to the shell):
dropdb "database/withslash"