What "*#" means after executint a command in PostgreSql 10 on Windows 7? - postgresql

I'm using PostgreSQL on Windows 7 through the command line. I want to import the content of several CSV files into a newly created table.
Before, the prompt showed the database name like this:
database=#
Now it appears like this:
database*#
after executing:
type directory/*.csv | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER';
What does *# mean?
Thanks

This answer is for Linux and as such doesn't answer the OP's question for Windows. I'll leave it up anyway for anyone who comes across this in the future.
You accidentally started a block comment with your type directory/*.csv. type doesn't do what you think it does. From the Bash built-ins documentation:
With no options, indicate how each name would be interpreted if used as a command name.
Try doing cat instead:
cat directory/*.csv | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER';
If this gives you issues because each CSV has its own header, you can also do:
for file in directory/*.csv; do cat "$file" | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER'; done
Type Command
The type built-in command in Bash is a way of viewing command interpreter results. For example, using it with ssh:
$ type ssh
ssh is /usr/bin/ssh
This indicates how ssh would be interpreted when you run it as a command in the current Bash environment. This is useful for things like aliases. For example, ll is usually an alias for ls -l. Here's what my Bash environment has for ll:
$ type ll
ll is aliased to `ls -l --color=auto'
In your case, when you pipe the output of this command to psql, psql encounters the /* in the input and treats it as the start of a block comment, which is what the database*# prompt means (the * indicates psql is waiting for the comment-closing pattern, */).
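You can see the same prompt change directly in psql; here's a minimal sketch (the comment text is arbitrary):
database=# /* this opens a block comment
database*# the * shows psql is still inside the comment
database*# */
database=#
The prompt returns to database=# once the closing */ has been read.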
Cat Command
cat is for concatenating multiple files together. By default, it writes to standard output, so cat directory/*.csv will write each CSV file to standard output one after another. However, this means that each CSV's header line is also piped mid-stream into the COPY. That may not be desirable.
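For instance, with two hypothetical files a.csv and b.csv, the concatenated stream looks like this, with the second header landing in the middle of the data:
$ cat directory/a.csv directory/b.csv
value1,value2
1,2
value1,value2
3,4
Since COPY ... CSV HEADER only skips the very first line of its input, the second header would be loaded (or rejected) as data. To import each file separately instead: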
For Loop
We can use for to loop over each file and import it individually. The version above, for file in directory/*.csv, properly handles file names with spaces. Properly formatted:
for file in directory/*.csv; do
    cat "$file" | psql -c 'COPY sch.trips(value1, value2) from stdin CSV HEADER'
done
References
PostgreSQL 10 Comments Documentation (postgresql.org)
type built-in Manual page (mankier.com)
cat Manual page (mankier.com)
Bash looping tutorial (tldp.org)

Related

Wrong copy command output with postgres

I'm trying to use the COPY command to copy the content of a file into a database.
One of the lines has this:
CCc1ccc(cc1)C(=O)/N=c\1/n(ccs1)C
When I insert this normally into the database there are no errors.
But when I try the following command, this line is not inserted correctly:
cat smile_test.txt | psql -c "copy testzincsmile(smile) from stdout" teste
This is what I get (it is wrong):
CCc1ccc(cc1)C(=O)/N=c/n(ccs1)C
What's wrong here?
Thank you :)
COPY expects a specific input format and cannot simply be used to read arbitrary text from a file into a field.
See the manual.
The specific issue you're hitting is probably the backslash being interpreted as an escape by COPY's default text format: \1 is read as an octal escape for the (invisible) byte 0x01, which is why the literal \1 disappears from the output.
I figured out how to do this.
This is my answer:
cat smile_test.txt | sed '1d; s/\\/\\\\/g' | psql -c "copy testzincsmile(smile) from stdout" teste
The sed expression deletes the first line (1d) and doubles every backslash (s/\\/\\\\/g) so that COPY's escape handling turns them back into single backslashes.

How to copy a csv file from a url to Postgresql

Is there any way to use the COPY command for batch data import and read the data from a URL? For example, COPY has a syntax like:
COPY sample_table
FROM 'C:\tmp\sample_data.csv' DELIMITER ',' CSV HEADER;
What I want is to give a URL instead of a local path. Is there any way?
It's pretty straightforward, provided you have an appropriate command-line tool available:
COPY sample_table FROM PROGRAM 'curl "http://www.example.com/file.csv"'
Since you appear to be on Windows, I think you'll need to install curl or wget yourself. There is an example using wget on Windows here which may be useful.
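If you'd rather use wget, an equivalent sketch (assuming wget is installed on the database server and on its PATH; -qO- writes the download to standard output) would be:
COPY sample_table FROM PROGRAM 'wget -qO- "http://www.example.com/file.csv"' DELIMITER ',' CSV HEADER;
Note that PROGRAM runs the command on the database server, not on the client, so the tool has to be available there.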
My solution is:
cat "$file" |
tail -n "$numberLine" |
sed 's/ / ,/g' |
psql -q -d "$dataBaseName" -c "COPY tableName FROM STDIN DELIMITER ','"
(The variables are quoted so that file names and values containing spaces don't break the pipeline.)
You can insert an awk between sed and psql to add a missing column. This is useful if you already know what to put in the missing column:
awk -v extra="$info_about_missing_column" '{print $0 " , " extra}'
(print already appends a newline, so no explicit "\n" is needed.)
I have done this and it works, and it is faster than INSERT.
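Putting it all together, the pipeline with the awk step inserted would look something like this (all names are the hypothetical ones from above):
cat "$file" |
tail -n "$numberLine" |
sed 's/ / ,/g' |
awk -v extra="$info_about_missing_column" '{print $0 " , " extra}' |
psql -q -d "$dataBaseName" -c "COPY tableName FROM STDIN DELIMITER ','"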

sh: variable substitution with heredoc

cat "${pos}" | /usr/bin/iconv -f CP1251 -t UTF-8 | uniq | sed -En "/^CLIENT_ID.*/!p" | while read line
do
.....
......
cat >> "$TMPFILE" << EOF
INSERT INTO ......;
EOF
done
As you can see, each iteration appends an SQL statement to a tmp file.
I launched this script from a regular interactive shell and got the expected output. Launched from a cron job: nothing.
After investigating, I found something odd: when I use $TMPFILE without the double quotes, the script works fine. Why does this happen?
OS: FreeBSD, Bourne shell.
IIRC, cron doesn't source all the files that a login shell does, so you will end up with different settings for environment variables. It could be, for example, that the path $TMPFILE points to contains spaces when run from cron.
Also, on some systems (depending on setup), cron uses a different shell. So when you start your script from the command line, /usr/bin/sh might be used, whereas when it is started by cron, /bin/sh is used. (I have no experience with *BSD, but I have observed this on Linux.)
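One way to confirm an environment difference between the two contexts (a debugging sketch; the file paths are arbitrary) is to dump and compare the environments:
# from the interactive shell
env | sort > /tmp/env.interactive
# temporary crontab entry; remove it after one run
* * * * * env | sort > /tmp/env.cron
Running diff /tmp/env.interactive /tmp/env.cron then shows exactly which variables (PATH, SHELL, and so on) differ when the script runs under cron.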

Postgres COPY command to tail a file?

I want to use the COPY command to copy data into Postgres while other processes are simultaneously writing to the CSV file.
Is something like this possible? Take the stdout of tail and pipe it into the stdin of Postgres:
COPY targetTable ( column1, column2 )
FROM `tail -f 'path/to/data.csv'`
WITH CSV
Assuming PostgreSQL 9.3 or later, it's possible to copy from a program's output with:
COPY FROM PROGRAM 'command'
From the doc:
PROGRAM
A command to execute. In COPY FROM, the input is read from standard
output of the command, and in COPY TO, the output is written to the
standard input of the command.
This may be what you need, except that tail -f is by design a never-ending command, so it's not obvious how the COPY would ever finish. Presumably you'd need to replace tail -f with a more elaborate script that has some exit condition, as sketched below.
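For example, a bounded variant that loads only the lines currently at the end of the file (a sketch reusing the names from the question; adjust the line count to taste):
COPY targetTable ( column1, column2 )
FROM PROGRAM 'tail -n 1000 /path/to/data.csv'
WITH CSV;
Unlike tail -f, tail -n exits after printing, so the COPY can complete normally.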
You can also do COPY FROM STDIN.
Example:
tail -f datafile.csv | psql -tc "COPY table from STDIN" database

Convert pipe delimited csv to tab delimited using batch script

I am trying to write a batch script that will query a Postgres database and output the results to a CSV. Currently, it queries the database and saves the output as a pipe-delimited CSV.
I want the output to be tab-delimited rather than pipe-delimited, since I will eventually import the CSV into Access. Does anyone know how this can be achieved?
Current code:
cd C:\Program Files\PostgreSQL\9.1\bin
psql -c "SELECT * from jivedw_day;" -U postgres -A -o sample.csv cscanalytics
postgres = username
cscanalytics = database
You should be using COPY to dump CSV:
psql -c "copy jivedw_day to stdout csv delimiter E'\t'" -o sample.csv -U postgres -d cscanalytics
The delimiter E'\t' part gets you tabs instead of commas as the delimiter. There are other options as well; please see the documentation for further details.
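If Access expects column names in the first row, COPY can emit them too; for example (same table and database as above):
psql -c "copy jivedw_day to stdout csv delimiter E'\t' header" -o sample.csv -U postgres -d cscanalytics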
Using -A as you are just dumps the usual interactive output to sample.csv, minus the normal padding that makes the columns line up; that's why you're seeing the pipes:
-A
--no-align
Switches to unaligned output mode. (The default output mode is otherwise aligned.)