In this link, they say we can export data into a file. However, they are using ASA (SQL Anywhere), which is different from ASE, so is there a query similar to this
SELECT * FROM SomeTable;
OUTPUT TO 'C:\temp\sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE ''
that we can run on ASE?
You must know that "output to" is a command only available in Interactive SQL (you must run the isql client in order for it to work). Documentation here
The syntax for ASE is:
SELECT * FROM SomeTable
GO
OUTPUT TO 'C:\temp\sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE ''
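If Interactive SQL is not available, a common command-line alternative for ASE (only a sketch; the database, login, and server names below are placeholders) is to export the table with the bcp utility in character mode, using ; as the field terminator:
bcp mydb..SomeTable out C:\temp\sometable.csv -c -t ";" -U myuser -P mypassword -S MYSERVER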
I have the following row in an AWS Redshift warehouse table.
name
-------------------
tokenauthserver2018
I queried this via a simple SELECT query:
SELECT name
FROM tablename
When I try to unload it using an UNLOAD query from AWS Redshift, it finishes successfully but gives weird quoting.
"name"
"tokenauthserver2018\
Here is my query
UNLOAD ($TABLE_QUERY$
SELECT name
FROM tablename
$TABLE_QUERY$)
TO 's3://bucket/folder'
MANIFEST VERBOSE HEADER DELIMITER AS ','
NULL AS '' ESCAPE GZIP ADDQUOTES ALLOWOVERWRITE PARALLEL OFF;
I tried unloading without ADDQUOTES as well, but got the following data:
name
"tokenauthserver2018
This is the query for the above:
UNLOAD ($TABLE_QUERY$
SELECT name
FROM tablename
$TABLE_QUERY$)
TO 's3://bucket/folder'
MANIFEST VERBOSE HEADER CSV NULL AS '' GZIP ALLOWOVERWRITE PARALLEL OFF;
Amazon support was able to resolve this; I am posting the answer here for anyone interested.
This was due to the presence of the NULL character \0 in my data. As I don't have control over the source data, I used the TRANSLATE function to remove the \0 character.
SELECT
TRANSLATE("name", CHR(0), '') AS "name"
FROM <tablename>
Reference: https://docs.aws.amazon.com/redshift/latest/dg/r_TRANSLATE.html
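Putting it together, the UNLOAD from the question with the TRANSLATE workaround applied should look roughly like this (same bucket path and options as above):
UNLOAD ($TABLE_QUERY$
SELECT TRANSLATE("name", CHR(0), '') AS "name"
FROM tablename
$TABLE_QUERY$)
TO 's3://bucket/folder'
MANIFEST VERBOSE HEADER CSV NULL AS '' GZIP ALLOWOVERWRITE PARALLEL OFF;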
I have a simple query that I run inside a PostgreSQL script that runs a bunch of queries and then extracts the result to CSV, which [without the bunch of queries above it] is:
COPY (SELECT * FROM public.view_report_teamlist_usertable_wash)
TO 'd:/sf/Reports/view_report_teamlist_usertable_wash.csv'
DELIMITER ','
CSV HEADER encoding 'UTF8' FORCE QUOTE *;
Can I alter the above at all to append the date/time [now()] to the filename? i.e. 'd:/sf/Reports/view_report_teamlist_usertable_wash_2017-08-23 14:30:28.288912+10.csv'
I have googled it many times but only come up with solutions that run it from the command line.
If you only need this once or irregularly, you could use a Postgres DO block. But if you use the script regularly, you should write a PL/pgSQL function.
Either way, it should be like this:
DO $$
DECLARE
    variable text;
BEGIN
    variable := to_char(NOW(), 'YYYY-MM-DD_HH24:MI:SS');
    EXECUTE format('COPY (SELECT * FROM public.view_report_teamlist_usertable_wash)
        TO ''d:/sf/Reports/view_report_teamlist_usertable_wash_%s.csv''
        DELIMITER '',''
        CSV HEADER encoding ''UTF8'' FORCE QUOTE *', -- %s will be replaced by the string variable
        variable -- file name
    );
END $$;
-- // NOTE the '' for escaping '
EDIT: DO runs inline with other queries.
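For the regular-use case, here is a minimal sketch of the same logic wrapped in a PL/pgSQL function (the function name is made up; the COPY options are taken from the question):
CREATE OR REPLACE FUNCTION export_wash_report() RETURNS void AS $func$
DECLARE
    fname text;
BEGIN
    -- build the target file name with a timestamp suffix
    fname := 'd:/sf/Reports/view_report_teamlist_usertable_wash_'
             || to_char(now(), 'YYYY-MM-DD_HH24:MI:SS') || '.csv';
    -- %L quotes the file name as a SQL literal
    EXECUTE format('COPY (SELECT * FROM public.view_report_teamlist_usertable_wash)
        TO %L
        DELIMITER '','' CSV HEADER encoding ''UTF8'' FORCE QUOTE *', fname);
END;
$func$ LANGUAGE plpgsql;
-- run it whenever an export is needed:
SELECT export_wash_report();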
I have an anonymous function containing a query within a FOR loop that executes 100 times, and I need to save the 100 result sets as 100 files on the remote client (not on the server).
It seems like the psql \copy meta-command should be the way to do this, but I'm at a loss. Something of this form, maybe?
\copy (anonymous_function_w/_FOR_loop_here) to 'filename.txt'
where filename.txt is built from the FOR loop variable's value in each iteration. That's important - the files on the remote client need to be named based on the FOR loop's variable.
Is there any way to pull this off? I suppose an alternative approach would be to UNION all 100 query results into one big result, with the FOR loop's variable value in one field, and then use bash scripting to split it into 100 appropriately named files. But my bash skills are pretty lame. If psql can do the job directly, that would be great.
EDIT: I should add that here's what the FOR loop variable looks like:
FOR rec IN SELECT DISTINCT county FROM voter.counties
so the file name would be built from rec.county + '.txt'
The typical approach to this is to use a SQL statement that generates the necessary statements, spool the output into a script file, then run that file.
Something like:
-- prepare for a "plain" output without headers or something similar
\a
\t
-- spool the output into export.sql
\o export.sql
select format('\copy (select * from some_table where county = %L) to ''%s.txt''', county, county)
from (select distinct county from voter.counties) t;
-- turn spooling off
\o
-- run the generated file
\i export.sql
So for each county name in voter.counties, export.sql will contain:
\copy (select * from some_table where county = 'foobar') to 'foobar.txt'
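If you want the whole export to run non-interactively, one option (a sketch; the database and script names are placeholders) is to save the commands above into a file, e.g. generate_exports.sql, and run it from the shell:
psql -d mydb -f generate_exports.sql
The generated export.sql and the per-county .txt files are written on the client side, relative to the directory psql is run from.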
Is there a query equivalent to SQL Server's OPENQUERY or OPENROWSET that can be used in PostgreSQL to query from Excel or CSV?
You can use PostgreSQL's COPY
As per doc:
COPY moves data between PostgreSQL tables and standard file-system files. COPY TO copies the contents of a table to a file, while COPY FROM copies data from a file to a table (appending the data to whatever is in the table already). COPY TO can also copy the results of a SELECT query.
COPY works like this:
Importing a table from CSV
Assuming you already have a table in place with the right columns, the command is as follows:
COPY tblemployee FROM '~/empsource.csv' DELIMITER ',' CSV;
Exporting a CSV from a table.
COPY (select * from tblemployee) TO '~/exp_tblemployee.csv' DELIMITER ',' CSV;
It's important to mention here that if your data is in Unicode or needs strict encoding, always set client_encoding before running any of the above commands.
To set the client_encoding parameter in PostgreSQL:
set client_encoding to 'UTF8'
or
set client_encoding to 'latin1'
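To check what the current session is using:
show client_encoding;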
Another thing to guard against is nulls: when exporting, if some fields are null, PostgreSQL will write \N to represent a null field. This is fine, but may cause issues if you are trying to import that data into, say, SQL Server.
A quick fix is to modify the export command by specifying what you would prefer as the null placeholder in the exported CSV:
COPY (select * from tblemployee) TO '~/exp_tblemployee.csv' DELIMITER ',' NULL as E'';
Another common requirement is importing or exporting with a header.
Import a CSV into a table with a header (column names present in the first row of the CSV file):
COPY tblemployee FROM '~/empsource.csv' DELIMITER ',' CSV HEADER
Export a table to CSV with headers present in the first row:
COPY (select * from tblemployee) TO '~/exp_tblemployee.csv' DELIMITER ',' CSV HEADER
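For reference, on newer PostgreSQL versions (9.1+) the same options can also be combined in the parenthesized COPY option list; the absolute path here is just an example, since server-side COPY does not expand ~:
COPY (select * from tblemployee) TO '/tmp/exp_tblemployee.csv'
WITH (FORMAT csv, HEADER, DELIMITER ',', NULL '', ENCODING 'UTF8');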
I need to export the content of a DB2 table to a CSV file.
I read that nochardel would prevent having the separator between each value, but that is not happening.
Suppose I have a table
MY_TABLE
-----------------------
Field_A varchar(10)
Field_B varchar(10)
Field_C varchar(10)
I am using this command
export to myfile.csv of del modified by nochardel select * from MY_TABLE
I get this written into myfile.csv:
data1 ,data2 ,data3
but I would like no ',' separator, like below:
data1 data2 data3
Is there a way to do that?
You're asking how to eliminate the comma (,) in a comma separated values file? :-)
NOCHARDEL tells DB2 not to surround character-fields (CHAR and VARCHAR fields) with a character-field-delimiter (default is the double quote " character).
Anyway, when exporting from DB2 using the delimited format, you have to have some kind of column delimiter. There isn't a NOCOLDEL option for delimited files.
The EXPORT utility can't write fixed-length (positional) records - you would have to do this by either:
Writing a program yourself,
Using a separate utility (IBM sells the High Performance Unload utility)
Writing an SQL statement that concatenates the individual columns into a single string:
Here's an example for the last option:
export to file.del
of del
modified by nochardel
select
cast(col1 as char(20)) ||
cast(intcol as char(10)) ||
cast(deccol as char(30))
from my_table;
This last option can be a pain since DB2 doesn't have an sprintf() function to help format strings nicely.
Yes, there is another way of doing this. I always do this:
Put the select statement into a file (input.sql):
select
cast(col1 as char(20)),
cast(col2 as char(10)),
cast(col3 as char(30))
from my_table;
Call the DB2 CLP like this:
db2 -x -tf input.sql -r result.txt
This will work for you, because you need to cast varchar to char. Like Ian said, casting numbers or other data types to char might bring unexpected results.
PS: I think Ian points right on the difference between CSV and fixed-length format ;-)
Use "of asc" instead of "of del". Then you can specify the fixed column locations instead of delimiting.