Db2: how to configure external tables using extbl_location, extbl_strict_io

How do I configure external tables in Db2 using the extbl_location and extbl_strict_io parameters? Could you please give an example of how to set these parameters? I need to create an external table and load data into it.
I created a table like this:
CREATE EXTERNAL TABLE textteacher(ID int, Name char(50), email varchar(255)) USING ( DATAOBJECT 'teacher.csv' FORMAT TEXT CCSID 1208 DELIMITER '|' REMOTESOURCE 'LOCAL' SOCKETBUFSIZE 30000 LOGDIR '/tmp/logs' );
and tried to load data into it:
insert into textteacher (ID,Name,email) select id,name,email from teacher;
and got this exception:
[428IB][-20569] The external table operation failed due to a problem with the corresponding data file or diagnostic files. File name: "teacher.csv". Reason code: "1". SQLCODE=-20569, SQLSTATE=428IB, DRIVER=4.26.14
If I understand the documentation correctly, the extbl_location parameter should point to the directory where the data will be saved. I suppose the full path would look like:
$extbl_location + '/' + teacher.csv
I found some documentation about the error:
https://www.ibm.com/support/pages/how-resolve-sql20569n-error-external-table-operation
I tried to run this command in the Docker container's command line:
/opt/ibm/db2/V11.5/bin/db2 get db cfg | grep -i external
but it returned no information about external tables.

From the documentation of the CREATE EXTERNAL TABLE statement, under file-name:
When both the REMOTESOURCE option is set to LOCAL (this is its default value) and the extbl_strict_io configuration parameter is set
to NO, the path to the external table file is an absolute path and
must be one of the paths specified by the extbl_location configuration
parameter. Otherwise, the path to the external table file is relative
to the path that is specified by the extbl_location configuration
parameter followed by the authorization ID of the table definer. For
example, if extbl_location is set to /home/xyz and the authorization
ID of the table definer is user1, the path to the external table file
is relative to /home/xyz/user1/.
So, if you use a relative path to a file such as teacher.csv, you must set extbl_strict_io to YES.
For an unload operation, the following conditions apply:
If the file exists, it is overwritten.
Required permissions:
If the external table is a named external table, the owner must have read and write permission for the directory of this file.
If the external table is transient, the authorization ID of the statement must have read and write permission for the directory of this file.
Moreover, in the directory specified by extbl_location you must create a sub-directory named after the username (in lowercase) of the table owner, and ensure that this user (not the instance owner) has read/write permission on this sub-directory.
Update:
To set this up, presuming that user1 runs the INSERT statement:
sudo mkdir -p /home/xyz/user1
# user1 must have an ability to cd to this directory
sudo chown user1:$(id -gn user1) /home/xyz/user1
db2 connect to mydb
db2 update db cfg using extbl_location /home/xyz extbl_strict_io YES
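After updating the configuration, you can verify that it took effect. Note that these parameters are named extbl_*, which is why `grep -i external` in the question returned nothing; `grep -i extbl` does match them. A sketch (mydb is the assumed database name from the commands above):

```shell
# Re-check the database configuration for the external-table parameters.
db2 connect to mydb
db2 get db cfg for mydb | grep -i extbl   # should list EXTBL_LOCATION and EXTBL_STRICT_IO
```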

Related

How can I fill in a table from a file when using Flyway migration scripts

I have these scripts:
/*The extension is used to generate UUID*/
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- auto-generated definition
create table users
(
id uuid not null DEFAULT uuid_generate_v4 ()
constraint profile_pkey
primary key,
em varchar(255),
user varchar(255)
);
In IntelliJ IDEA (a Spring Boot project):
src/main/resources/db-migration
src/main/resources/sql_scripts:
copy.sql
user.txt
I'm just trying to run a simple SQL command for now to see that everything works:
copy.sql
COPY profile FROM '/sql_scripts/user.txt'
USING DELIMITERS ',' WITH NULL AS '\null';
user.txt
'm#mai.com', 'sara'
's#yandex.ru', 'jacobs'
But when I run the copy command, I get an error:
ERROR: could not open file...
Does anyone know how this should work and what needs to be fixed?
There's a strong possibility it's a path issue; could you try, instead of
COPY profile FROM '/sql_scripts/user.txt'
doing
COPY profile FROM './sql_scripts/user.txt'
(or an absolute path)
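One more thing worth checking: server-side `COPY ... FROM 'file'` resolves the path on the database server, not on the client where the Spring Boot project lives, and it requires elevated file-read rights. From psql you can use the client-side `\copy` instead, which reads the file where psql runs. A sketch (mydb is an assumed database name; note the scripts above create a table named users but copy into profile, so use whichever table actually exists):

```shell
# \copy streams the client-local file through the connection,
# so a path relative to psql's working directory works.
psql -d mydb -c "\copy users FROM './sql_scripts/user.txt' WITH (DELIMITER ',', NULL '\null')"
```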

Loading/inserting multiple XML files into an Oracle table at once

I have a table xml_table_data. Below are the structure and sample data in the table.
I want to insert multiple XML files (9 of them here) into the table in one go. These files reside in a DB directory.
My code to insert xml file into the table:
CREATE TABLE xml_table_data (
File_name varchar2(100),
Insert_date timestamp,
xml_data XMLTYPE
);
INSERT INTO xml_table_data (file_name, xml_data)
VALUES ( 'DataTransfer_HH_TWWholesale_001_004_12142020113003.xml',
XMLTYPE (BFILENAME ('TESTING', 'DataTransfer_HH_TWWholesale_001_004_12142020113003.xml'), NLS_CHARSET_ID ('AL32UTF8')));
Please help me with this. Thanks for reading my query.
You can use an external table with preprocessing to read the filenames from the directory.
ALTER SESSION SET CONTAINER=pdb1;
CREATE DIRECTORY data_dir AS '/u02/data';
CREATE DIRECTORY script_dir AS '/u02/scripts';
CREATE DIRECTORY log_dir AS '/u02/logs';
GRANT READ, WRITE ON DIRECTORY data_dir TO demo1;
GRANT READ, EXECUTE ON DIRECTORY script_dir TO demo1;
GRANT READ, WRITE ON DIRECTORY log_dir TO demo1;
Create a list_files.sh file in the scripts directory. Make sure oracle is the owner and the file's permissions are 755. Preprocessor scripts don't inherit the $PATH environment variable, so every command must be prefixed with /usr/bin:
#!/bin/sh
/usr/bin/ls -1 /u02/data/test*.xml | /usr/bin/xargs -n1 /usr/bin/basename
You also need a location file for the external table, but this can be an empty dummy file.
CREATE TABLE data_files
( file_name VARCHAR2(255))
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE CHARACTERSET AL32UTF8
PREPROCESSOR script_dir: 'list_files.sh'
BADFILE log_dir:'list_files_%a_%p.bad'
LOGFILE log_dir:'list_files_%a_%p.log'
FIELDS TERMINATED BY WHITESPACE
)
LOCATION ('dummy.txt')
)
REJECT LIMIT UNLIMITED;
Now you can insert the xml data into your table.
INSERT INTO xml_table_data
( file_name,
insert_date,
xml_data
)
SELECT file_name,
SYSTIMESTAMP,
XMLTYPE (BFILENAME ('DATA_DIR', file_name), NLS_CHARSET_ID ('AL32UTF8'))
FROM data_files;
You will still need to adapt this example to your environment.

Error while loading parquet format file into Amazon Redshift using copy command and manifest file

I'm trying to load a parquet file using a manifest file and I'm getting the error below:
query: 124138 failed due to an internal error. File 'https://s3.amazonaws.com/sbredshift-east/data/000002_0' has an invalid version number: )
Here is my copy command
copy testtable from 's3://sbredshift-east/manifest/supplier.manifest'
IAM_ROLE 'arn:aws:iam::123456789:role/MyRedshiftRole123'
FORMAT AS PARQUET
manifest;
Here is my manifest file:
{
  "entries": [
    {
      "url": "s3://sbredshift-east/data/000002_0",
      "mandatory": true,
      "meta": {
        "content_length": 1000
      }
    }
  ]
}
I'm able to load the same file using copy command by specifying the file name.
copy testtable from 's3://sbredshift-east/data/000002_0' IAM_ROLE 'arn:aws:iam::123456789:role/MyRedshiftRole123' FORMAT AS PARQUET;
INFO: Load into table 'supplier' completed, 800000 record(s) loaded successfully.
COPY
What could be wrong in my copy statement?
This error happens when the content_length value is wrong. You have to specify the correct content_length; you can check it by executing an s3 ls command:
aws s3 ls s3://sbredshift-east/data/
2019-12-27 11:15:19 539 sbredshift-east/data/000002_0
The 539 (file size) should be the same as the content_length value in your manifest file.
I don't know why they are using this meta value when you don't need it in the direct copy command.
¯\_(ツ)_/¯
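One way to avoid the mismatch is to generate the manifest from the actual file size rather than typing it in. A minimal sketch (the bucket/key are the ones from the question; a locally created stand-in file provides the byte count, so replace it with your real copy of the object):

```shell
# Stand-in for the real data file; replace with a local copy of the S3 object.
printf 'sample parquet bytes' > 000002_0
SIZE=$(wc -c < 000002_0 | tr -d ' ')   # exact size in bytes

# Emit a manifest whose content_length matches that size.
cat > supplier.manifest <<EOF
{
  "entries": [
    {
      "url": "s3://sbredshift-east/data/000002_0",
      "mandatory": true,
      "meta": { "content_length": ${SIZE} }
    }
  ]
}
EOF
```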
The only way I've gotten parquet copy to work with manifest file is to add the meta key with the content_length.
From what I can gather in my error logs, the COPY command for parquet (with a manifest) might first be reading the files using Redshift Spectrum as an external table. If that's the case, this hidden step does require the content_length, which contradicts their initial statement about COPY commands.
https://docs.amazonaws.cn/en_us/redshift/latest/dg/loading-data-files-using-manifest.html

Postgres "Did not find any relation named <tablename>"

I created a new database in Postgres (Ubuntu 18.04) and created a table from the Postgres command line with:
CREATE TABLE TMB01
The command line returned with no error message. Then I created columns from the command line (one by one, but I only had four column names to enter).
Now I want to see the names of all tables in my database:
\d+ "TMB01"
"Did not find any relation named "TMB01."
Try it without quotes:
\d+ TMB01
"Did not find any relation named "TMB01."
Then I tried:
select * from TMB01 where false
No error message, cursor returns.
What went wrong with my table creation?
The only reason you didn't get an error with this command:
CREATE TABLE TMB01
Is that it wasn't finished yet. There's no ; at the end. At a minimum you would need:
CREATE TABLE TMB01 ();
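A second thing to be aware of once the statement is complete: unquoted identifiers are folded to lowercase, so `CREATE TABLE TMB01 ();` creates a table named tmb01, and the quoted `\d+ "TMB01"` will still fail. A sketch of a psql session (assuming a running server and a database named mydb):

```shell
psql -d mydb <<'SQL'
CREATE TABLE TMB01 ();   -- stored in the catalog as tmb01
\d tmb01                 -- found
\d "TMB01"               -- Did not find any relation named "TMB01"
SQL
```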
Try granting access privileges to the postgres user.

How do you set the application_name attribute for PostgreSQL in Laravel?

How do I set the application_name, as defined here: http://www.postgresql.org/docs/9.1/static/libpq-connect.html, in Laravel? I see that you can do "SET application_name = 'application'", but this does not work for me. I also tried setting it in the app/config/database.php file, in the 'connections' array. What am I doing wrong?
You have to add a variable for the application name to the env file (/.env), e.g. DB_APPLICATION_NAME=<your app name>.
From Laravel version 5.5 you can add this row to the file /config/database.php, at the bottom of the pgsql connection:
'application_name' => env('DB_APPLICATION_NAME', 'Laravel')
If you don't specify the app name in the .env file, the application name falls back to the default defined in /config/database.php.
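To confirm the name is actually being sent, you can list it per connection from any psql session while the Laravel app holds an open connection (a sketch; mydb is an assumed database name):

```shell
# Each backend reports the application_name its client supplied.
psql -d mydb -c "SELECT pid, application_name FROM pg_stat_activity;"
```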