I'm having trouble loading tables that have CLOB and BLOB columns into a 'SQL Database' database in Bluemix.
The error returned is:
SQL3229W The field value in row "617" and column "3" is invalid. The row was
rejected. Reason code: "1".
SQL3185W The previous error occurred while processing data from row "617" of
the input file.
The same procedure works normally in a local environment.
This is the command I use to load:
load client from /home/db2inst1/ODONTO/tmp/ODONTO.ANAMNESE.IXF OF IXF LOBS FROM /home/db2inst1/ODONTO/tmp MODIFIED BY IDENTITYOVERRIDE replace into USER12135.TESTE NONRECOVERABLE
Currently, the only way to upload LOB files to a SQLDB or dashDB database is to load the data and LOBs from the cloud. The options are Swift object storage in SoftLayer or Amazon S3 storage; you need an account with one of those services.
After that, you can use the following syntax:
db2 "call sysproc.admin_cmd('load from Softlayer::softlayer_end_point::softlayer_username::softlayer_api_key::softlayer_container_name::mylobs/blob.del of del LOBS FROM Softlayer::softlayer_end_point::softlayer_username::softlayer_api_key::softlayer_container_name::mylobs/ messages on server insert into LOBLOAD')"
Where:
mylobs/ is the directory inside the SoftLayer Swift object storage container specified by softlayer_container_name
LOBLOAD is the name of the table to load into
Example:
db2 "call sysproc.admin_cmd('load from Softlayer::https://lon02.objectstorage.softlayer.net/auth/v1.0::SLOS424907-2:SL523907::0ac631wewqewre8af20c576ad5214ec70f163d600d247bd5d4dfef5453f72ff6::TestContainer::mylobs/blob.del of del LOBS FROM Softlayer::https://lon02.objectstorage.softlayer.net/auth/v1.0::SLOS424907-2:SL523907::0ac631wewqewre8af20c576ad5214ec70f163d600d247bd5d4dfef5453f72ff6::TestContainer::mylobs/ messages on server insert into LOBLOAD')"
I am using Amazon RDS Aurora PostgreSQL 10.18, and I need to export specific tables with more than 50,000 rows into a CSV file (either locally or into an S3 bucket). I have tried several procedures, but all of them failed:
I tried the export-to-CSV button in the query editor after selecting all rows, but the API responded that the data was too large to return.
I tried to use aws_s3.query_export_to_s3, but got: ERROR: credentials stored with the database cluster can't be accessed. Hint: Has the IAM role Amazon Resource Name (ARN) been associated with the feature-name "s3Export"?
I tried to take a snapshot of our instance and then export it into an S3 bucket, but that ended with the error: The specified db snapshot engine mode isn't supported and can't be exported.
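The hint in the aws_s3.query_export_to_s3 error points at associating an IAM role (one that can write to the bucket) with the cluster for the s3Export feature. A minimal sketch of that association and of the export call, with a hypothetical cluster identifier, role ARN, bucket, and table:
# associate the IAM role with the Aurora cluster for the s3Export feature (placeholders throughout)
aws rds add-role-to-db-cluster --db-cluster-identifier my-aurora-cluster --role-arn arn:aws:iam::123456789012:role/aurora-s3-export --feature-name s3Export
# then run the export from psql; bucket, key, and region are placeholders
psql -c "SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM my_table', aws_commons.create_s3_uri('my-bucket', 'exports/my_table.csv', 'us-east-1'), options := 'format csv')"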
I have created a visual job in AWS Glue where I extract data from Snowflake, and my target is a PostgreSQL database in AWS.
I have been able to connect to both Snowflake and PostgreSQL, and I can preview data from both.
I have also been able to get data from Snowflake, write it to S3 as CSV, and then take that CSV and upload it to PostgreSQL.
However, when I try to get data from Snowflake and push it directly to PostgreSQL, I get the error below:
o110.pyWriteDynamicFrame. null
So this means that you can get the data from Snowflake into a DataFrame, but the write from that DataFrame to PostgreSQL is failing.
You need to check the AWS Glue logs to understand why the write into PostgreSQL is failing.
Please also check that you have the right versions of the JARs (needed by PostgreSQL) that are compatible with Scala on the AWS Glue side.
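As a rough sketch of how to get at those logs from a terminal: Glue writes driver and error output to CloudWatch log groups (the group names below are the Glue defaults, so adjust them if your job uses custom logging), and the AWS CLI v2 can tail them:
# tail the Glue error and output log groups for recent runs (AWS CLI v2)
aws logs tail /aws-glue/jobs/error --since 1h
aws logs tail /aws-glue/jobs/output --since 1h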
I'm new to Db2. I'm trying to send data from remote Db2 server A to remote Db2 server B using a Java-based application. I was able to fetch the data from server A and store it in the control/data files, but when I try to send the data to server B, I get the following exception.
com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=EXTERNAL;T_DATA SELECT * FROM;<table_expr>, DRIVER=4.26.14
The control file has the command:
INSERT INTO <TABLE_NAME> SELECT * FROM EXTERNAL '<PATH_TO_DATAFILE>'
USING (DELIMITER '\t' FORMAT TEXT SOCKETBUFSIZE 100 REMOTESOURCE 'JDBC')
The data file contains records in which the values are separated by tabs.
Both server A and server B are running Db2 v9.5.
The failure was caused by the target server B being an out-of-support version of Db2 (v9.5) that has no ability to understand external tables. Hence it reported (correctly) SQLCODE -104 on the token EXTERNAL, which it did not understand.
So the design is incorrect for the Db2 versions available at your site. You can only use external tables in recent Db2-LUW versions (v11.5).
Depending on the tools available, you can use commands (external tools, not SQL) to export data from the source and load it into the target, as sketched below. Additionally, if there is network connectivity directly between server A and server B, an administrator can arrange federation between them, allowing direct inserts.
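A minimal sketch of that export/load route using the Db2 CLP, assuming hypothetical database names, credentials, and a table MYSCHEMA.T_DATA:
# on (or connected to) server A: export the table to an IXF file
db2 connect to SRCDB user usera using passworda
db2 "EXPORT TO /tmp/t_data.ixf OF IXF MESSAGES /tmp/export.msg SELECT * FROM MYSCHEMA.T_DATA"
db2 connect reset
# on (or connected to) server B: load the exported file into the target table
db2 connect to TGTDB user userb using passwordb
db2 "LOAD CLIENT FROM /tmp/t_data.ixf OF IXF MESSAGES /tmp/load.msg INSERT INTO MYSCHEMA.T_DATA NONRECOVERABLE"
db2 connect reset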
Db2 v9.5 also supported load from cursor, and load from remote cursor (although there were problems, long since fixed in newer versions).
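And a sketch of load from a remote cursor, with the same hypothetical names; both databases must be cataloged on the machine where the CLP runs:
db2 connect to TGTDB user userb using passwordb
# declare a cursor against the remote source database, then load directly from it
db2 "DECLARE srccur CURSOR DATABASE SRCDB USER usera USING passworda FOR SELECT * FROM MYSCHEMA.T_DATA"
db2 "LOAD FROM srccur OF CURSOR INSERT INTO MYSCHEMA.T_DATA NONRECOVERABLE"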
I am currently trying to use a dashDB database with the db2cli utility and ODBC (values are from Connect/Connection Information on the dashDB web console). At the moment I can run SELECT or INSERT statements and fetch data from custom tables I have created without any problem, using the command:
db2cli execsql -connstring "DRIVER={IBM DB2 ODBC DRIVER - IBMDBCL1}; DATABASE=BLUDB; HOSTNAME=yp-dashdb-small-01-lon02.services.eu-gb.bluemix.net; PORT=50000; PROTOCOL=TCPIP; UID=xxxxxx; PWD=xxxxxx" -inputsql /tmp/input.sql
Now I am trying to do a DB2 LOAD operation through the db2cli utility, but I don't know how to proceed or even if it is possible to do so.
The aim is to import data from a file without cataloging the DB2 dashDB database on my side, only through ODBC. Does someone know if this kind of operation is possible (with db2cli or another utility)?
The latest API version referenced from the Db2 on Cloud (formerly dashDB) dashboard is available here. It first requires a call to the /auth/tokens endpoint to generate an auth token from your Bluemix credentials, which is then used to authorize the API calls.
I've recently published an npm module - db2-rest-client - to simplify the usage of these operations. For example, to load data from a .csv file you can use the following commands:
# install the module globally
npm i db2-rest-client -g
# call the load job
export DB_USERID='<USERID>'
export DB_PASSWORD='<PASSWORD>'
export DB_URI='https://<HOSTNAME>/dbapi/v3'
export DEBUG=db2-rest-client:cli
db2-rest-client load --file=mydata.csv --table='MY_TABLE' --schema='MY_SCHEMA'
For the load job, a test on Bluemix dedicated with a 70 MB source file and about 4 million rows took about 4 minutes to load. There are also other CLI options, such as executing an export statement, comma-separated statements, and uploading files.
This is not possible. LOAD is not an SQL statement, therefore it cannot be executed through an SQL interface such as ODBC; it can only be run using the DB2 CLP, which in turn requires a cataloged database.
ADMIN_CMD() can be invoked via an SQL interface, however, it requires that the input file be on the server -- it won't work with a file stored on your workstation.
If JDBC is an option, you could use the CLPPlus IMPORT command.
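As a rough sketch of that CLPPlus route, reusing the connection details from the question (the file path, target table, and exact IMPORT options are assumptions to check against the CLPPlus documentation):
# connect with CLPPlus directly by host/port/database, no cataloging needed
clpplus xxxxxx@yp-dashdb-small-01-lon02.services.eu-gb.bluemix.net:50000/BLUDB
-- then, at the SQL> prompt, import a delimited file from the client machine
IMPORT FROM '/tmp/input.del' OF DEL INSERT INTO MYSCHEMA.MYTABLE;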
You can try loading data using REST API.
Example:
curl --user dashXXX:XXXXXX -H "Content-Type: multipart/form-data" -X POST -F loadFile1=@"/home/yogesh/Downloads/datasets/order_details_0.csv" "https://yp-dashdb-small-01-lon02.services.eu-gb.bluemix.net:8443/dashdb-api/load/local/del/dashXXX.ORDER_DETAILS?hasHeaderRow=true&timestampFormat=YYYY-MM-DD%20HH:MM:SS.U"
I have used the REST API and have not seen any size limitations. In version 1.11 of dashDB Local (the warehouse database), external tables have been included. As long as the file is on the container, it can be loaded. Also, a DB2 LOAD locks the table until the load is finished, whereas an external table load won't.
There are a number of ways to get data into Db2 Warehouse on Cloud. From a command line you can use the Lift CLI https://lift.ng.bluemix.net/, which provides the best performance for large data sets.
You can also use EXTERNAL TABLEs https://www.ibm.com/support/knowledgecenter/ean/SS6NHC/com.ibm.swg.im.dashdb.sql.ref.doc/doc/r_create_ext_table.html, which are also high performance and have lots of options.
This is a quick example using a local file (not on the server), hence the REMOTESOURCE YES option:
db2 "create table foo(i int)"
echo "1" > /tmp/foo.csv
db2 "insert into foo select * from external '/tmp/foo.csv' using (REMOTESOURCE YES)"
db2 "select * from foo"
I
-----------
1
1 record(s) selected.
For large files, you can use gzip, either on the fly:
db2 "insert into foo select * from external '/tmp/foo.csv' using (REMOTESOURCE GZIP)"
or from gzipped files:
gzip /tmp/foo.csv
db2 "insert into foo select * from external '/tmp/foo.csv.gz' using (REMOTESOURCE YES)"
I have 50 txt files on Windows and I would like to insert their data into a single table on Redshift.
I created the basic table structure, and now I'm having issues inserting the data. I tried using the COPY command from SQLWorkbench/J but it didn't work out.
Here's the command:
copy feed
from 'F:\Data\feed\feed1.txt'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
Here's the error:
-----------------------------------------------
error: CREDENTIALS argument is not supported when loading from file system
code: 8001
context:
query: 0
location: xen_load_unload.cpp:333
process: padbmaster [pid=1970]
-----------------------------------------------;
Upon removing the Credentials argument, here's the error I get:
[Amazon](500310) Invalid operation: LOAD source is not supported. (Hint: only S3 or DynamoDB or EMR based load is allowed);
I'm not a UNIX user so I don't really know how this should be done. Any help in this regard would be appreciated.
@patthebug is correct in that Redshift cannot see your local Windows drive. You must push the data into an S3 bucket. There are some additional sources you can use per http://docs.aws.amazon.com/redshift/latest/dg/t_Loading_tables_with_the_COPY_command.html, but they seem outside the context you're working with. I suggest you get a copy of Cloudberry Explorer (http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx), which you can use to copy those files up to S3.
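Once the files are in S3, a minimal sketch of the upload (via the AWS CLI, as an alternative to Cloudberry Explorer) and the COPY; the bucket name and prefix are placeholders, and the tab delimiter is an assumption about the txt files (Redshift's default delimiter is the pipe character):
# upload the local files to a bucket prefix; one COPY then loads every object under feed/
aws s3 cp F:\Data\feed\ s3://my-bucket/feed/ --recursive --exclude "*" --include "*.txt"
copy feed
from 's3://my-bucket/feed/'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
delimiter '\t';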