db2 load csv database

Hi, I am trying to load a CSV file from the server into a DB2 table but couldn't get the command right. Please help.
I am in the server folder '/project/test/', where my CSV file apple.csv is.
orange is a table in the database I am connected to from here with the command 'db2 connect to rmidb user rmidb',
so now I am connected to rmidb.
The command I tried is:
db2 "load client from apple format delimited coldel ',' into table orange (org varchar(5), account varchar(10))"
What is not right in this command?
I tried it and the syntax is wrong, but I couldn't figure it out from the DB2 documentation.
In particular, how do I name the input file and the output file?
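For reference, a corrected form might look like the sketch below. It assumes orange already has columns named org and account: LOAD takes OF DEL for delimited files, a plain column list without data types, an explicit INSERT (or REPLACE) mode, and a MESSAGES clause naming the output file. LOAD CLIENT is only needed when the input file lives on your client machine rather than on the server.
db2 "LOAD FROM /project/test/apple.csv OF DEL MODIFIED BY COLDEL, MESSAGES /project/test/apple_load.msg INSERT INTO orange (org, account)"
Here COLDEL, declares the comma as the column delimiter (comma is also the default, so MODIFIED BY can be dropped), and rejected rows are described in apple_load.msg.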

Related

Unable to export Azure MySQL table to CSV

I found more than one post saying that to accomplish the task we have to run
SELECT * INTO OUTFILE 'file.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table
If I run this as administrator I get
Error Code: 1227. Access denied; you need (at least one of) the FILE
privilege(s) for this operation
To fix this we need to run
GRANT FILE ON *.* TO 'user'@'localhost';
where user is the admin user
If I do that I get
Error Code: 1045. Access denied for user 'user'@'%' (using password:
YES)
And here I am stuck.
Note: I tried exporting the data with Workbench. The process starts and never stops; after waiting 15 hours I had to stop it. It seems that exporting large tables (like the one I want to export) with Workbench doesn't work.
Note: It seems that Azure MySQL doesn't support the INTO OUTFILE command, but there is no indication of alternatives for exporting data to CSV.
Can someone please advise on how to export a big table out of an Azure MySQL database into a CSV file?
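Since INTO OUTFILE writes to the database server's own filesystem, which you cannot reach on a managed service, one common workaround is to run the query through the mysql command-line client and redirect the output to a local file. A sketch with placeholder server, user, and table names:
# Stream the table through the mysql client to a local file;
# --batch emits tab-separated rows, --raw disables escaping
mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p \
      --batch --raw -e "SELECT * FROM mydb.mytable" > mytable.tsv
The output is tab-separated rather than CSV; converting the delimiter (or using a dump tool instead) is a post-processing step.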

Creating Postgres table on AWS RDS using CSV file

I'm having an issue creating a table in my Postgres DB on AWS RDS by importing raw CSV data. Here are the steps I have already taken:
CSV file has been uploaded on my S3 bucket
Followed AWS's tutorial to give RDS permission to import data from S3
Created an empty table on postgres
Tried using pgAdmin's 'import' feature to import the local CSV file into the table, but it kept giving me errors.
So I'm using this query below to import the data into the table:
SELECT aws_s3.table_import_from_s3(
   'public.bayarea_property_data',  -- target table
   '',                              -- column list ('' = all columns)
   '(FORMAT CSV, HEADER true)',     -- COPY options
   'cottage-prop-data',             -- S3 bucket
   'clean_ta_file_edit.csv',        -- file key in the bucket
   'us-west-1'                      -- bucket region
);
However, I keep getting this message:
ERROR: extra data after last expected column
CONTEXT: COPY bayarea_property_data, line 2: ",2009.0,2009.0,0.0,,0,2019,13061.0,,0,0.0,0.0,,2019,0.0,6767.0,576040,172810,403230,70,1,,1.0,,6081,..."
SQL statement "copy public.bayarea_property_data from '/rdsdbdata/extensions/aws_s3/amazon-s3-fifo-6261-20200819T083314Z-0' with (FORMAT CSV, HEADER true)"
SQL state: 22P04
Can anyone help me with this? I'm an AWS noob and have been struggling for the past few days. Thanks!
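For what it's worth, that error generally means the CSV rows contain more fields than the target table has columns, so comparing the two counts is a sensible first check. A sketch, using the table name from the question:
-- Count the columns Postgres expects in the target table
SELECT count(*) AS table_columns
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'bayarea_property_data';
Compare that with the number of fields in the file's header row, e.g. head -1 clean_ta_file_edit.csv | awk -F',' '{print NF}' run wherever you have a local copy of the file.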

How to upload a 900 MB CSV file from a website to PostgreSQL

I want to do some data analysis with a file from NYC Open Data. The file is ~900 MB, so I am using a PostgreSQL database to store it. I am using pgAdmin 4 but could not figure out how to store the CSV in PostgreSQL directly, without first downloading it to my machine. Any help is greatly appreciated.
Thanks.
You can use:
pgAdmin, via the import/export dialog:
https://www.pgadmin.org/docs/pgadmin4/4.21/import_export_data.html
the COPY statement on the database server
the \copy command from psql on any client
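Since the goal is to avoid downloading the file first, one option is to stream it straight from the website into the database over standard input. A sketch with a placeholder URL, host, database, and table name (the table must already exist with matching columns):
# Stream the CSV from the web into COPY without saving it locally;
# psql reads the rows from its stdin and ships them to the server
curl -s 'https://example.com/nyc_open_data.csv' \
  | psql -h myhost -d mydb -c "COPY nyc_data FROM STDIN WITH (FORMAT csv, HEADER true)"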

How do I import a .csv file from a remote server to a PostgreSQL database?

The original code is a simple MySQL import:
LOAD DATA LOCAL INFILE 'D:/FTP/foo/foo.csv'
INTO TABLE error_logs
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY ''
LINES STARTING BY ''
TERMINATED BY '\n'
IGNORE 1 LINES
(Server,Client,Error,Time);
I need to migrate a web portal (from MySQL to Postgres; I know there are tools for that, but that's not the question), and the issue is that I am no longer working locally.
I didn't see anybody ask the question in this way: import a .csv file from a remote server into a Postgres DB.
I think I have to use COPY, but I can't get the right syntax...
Thanks for your attention.
The COPY command is an option for doing this; I had to do it once myself. These may help:
How to import CSV file data into a PostgreSQL table?
Copying PostgreSQL database to another server
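As a rough Postgres equivalent of the MySQL statement above, psql's \copy reads the file on the client machine (where psql runs) and streams it to the remote database. A sketch reusing the table, columns, and path from the question; HEADER plays the role of IGNORE 1 LINES, and the CSV format covers the quoting and escaping rules:
\copy error_logs(server, client, error, time) FROM 'D:/FTP/foo/foo.csv' WITH (FORMAT csv, HEADER true)
The column names are written in lowercase on the assumption that the Postgres table uses unquoted (case-folded) identifiers.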

Create query that copies from a CSV file on my computer to the DB located on another computer in Postgres

I am trying to create a query that will copy data from a CSV file that is located on my computer to a Postgres DB that is on a different computer.
Our Postgres DB is located on another computer, and I work on my own to import and query data. I have successfully copied data from the CSV file on MY computer TO the DB in the psql console using the following:
\COPY table_name FROM 'c:\path\to\file.csv' CSV DELIMITER E'\t' HEADER;
But when writing a query in the SQL Editor, I use the same code as above without the leading '\', and I get the following error:
ERROR: could not open file "c:\pgres\dmi_vehinventory.csv" for reading: No such file or directory
********** Error **********
ERROR: could not open file "c:\pgres\dmi_vehinventory.csv" for reading: No such file or directory
SQL state: 58P01
I assume the query is actually trying to find the file on the DB's computer rather than my own.
How do I write a query that tells Postgres to look for the file on MY computer rather than the DB's computer?
Any help will be much appreciated!
\COPY is the correct way if you want to upload a file from the local computer (the computer where you've started psql).
COPY is correct when you want to load a file that is on the remote host, from a directory on that host.
Here is an example; I've connected with psql to the remote server:
test=# COPY test(i, i1, i3)
FROM './test.csv' WITH DELIMITER ',';
ERROR: could not open file "./test.csv" for reading: No such file
test=# \COPY test(i, i1, i3)
FROM './test.csv' WITH DELIMITER ',';
test=# select * from test;
i | i1 | i3
---+----+----
1 | 2 | 3
(1 row)
There are several common misconceptions when dealing with PostgreSQL's COPY command.
Even though psql's \COPY FROM '/path/to/file/on/client' command has identical syntax (other than the backslash) to the backend's COPY FROM '/path/to/file/on/server' command, they are totally different. When you include a backslash, psql actually rewrites it to a COPY FROM STDIN command instead, and then reads the file itself and transfers it over the connection.
Executing a COPY FROM 'file' command tells the backend to itself open the given path and load it into a given table. As such, the file must be mapped in the server's filesystem and the backend process must have the correct permissions to read it. However, the upside of this variant is that it is supported by any postgresql client that supports raw sql.
Successfully executing a COPY FROM STDIN places the connection into a special COPY_IN state during which an entirely different (and much simpler) sub-protocol is spoken between the client and server, which allows for data (which may or may not come from a file) to be transferred from the client to the server. As such, this command is not well supported outside of libpq, the official client library for C. If you aren't using libpq, you may or may not be able to use this command, but you'll have to do your own research.
COPY FROM STDIN/COPY TO STDOUT doesn't really have anything to do with standard input or standard output; rather the client needs to speak the sub-protocol on the database connection. In the COPY IN case, libpq provides two commands, one to send data to the backend, and another to either commit or roll back the operation. In the COPY OUT case, libpq provides one function that receives either a row of data or an end of data marker.
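To make this concrete, here is a sketch of driving the COPY_IN sub-protocol through psql (which handles the libpq calls for you); it reuses the test table from the session above, with made-up rows fed via a here-document:
# psql issues the backend COPY, the server enters the COPY_IN state,
# and psql reads the rows from its standard input and sends them over the wire
psql -h remotehost -d test -c "COPY test(i, i1, i3) FROM STDIN WITH (FORMAT csv)" <<'EOF'
4,5,6
7,8,9
EOF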
I don't know anything about SQL Editor, but it's likely that issuing a COPY FROM STDIN command will leave the connection in an unusable state from its point of view, especially if it's connecting via an ODBC driver. As far as I know, ODBC drivers for PostgreSQL do not support COPY IN.