Latest .bak file in a Windows folder using a SQL script - T-SQL

I need help writing a SQL script to find the latest backup file in a Windows folder so that I can restore the database. The file names look like this:
dbnm_2019_4_5_11_30_613.bak
dbnm_2019_4_18_11_32_234.bak
dbnm_2019_4_11_11_37_34.bak
... the name is made up using the dbnm_year_month_day_hour_minute_second format.
I used the script below:
CREATE TABLE #File
(
    FileName SYSNAME,
    Depth    TINYINT,
    IsFile   TINYINT
);

INSERT INTO #File (FileName, Depth, IsFile)
EXEC xp_DirTree '[file location]', 1, 1;
Is there any way that I can insert a date field from the network folder to show when the backup file was created, and then order by that field to find the latest file?
When I use TOP 1 in the SELECT statement, it shows me 2019_4_5_11_30_613.bak as the latest file, which is incorrect.

Is there any way that I can insert a date field from the network folder
to show when the backup file was created and order by that field
to find the latest file.
SQL Server restores databases only from files on local drives (C:, D:, ...).
To get the backup sets I use this statement:
SELECT
    database_name AS DataBaseName,
    physical_device_name AS PhysicalDeviceName,
    backup_start_date AS BackupStartDate,
    backup_finish_date AS BackupFinishDate,
    CAST(backup_size / 1024.0 AS decimal(19,2)) AS BackupSizeKB,
    CAST(backup_size / 1024.0 / 1024.0 AS decimal(19,2)) AS BackupSizeMB,
    CAST(backup_size / 1024.0 / 1024.0 / 1024.0 AS decimal(19,2)) AS BackupSizeGB
FROM msdb.dbo.backupset b
JOIN msdb.dbo.backupmediafamily m ON b.media_set_id = m.media_set_id
WHERE CAST(b.backup_finish_date AS date) = CAST(GETDATE() - 1 AS date)
ORDER BY backup_finish_date
Pay attention to the clauses: WHERE and ORDER BY.
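As a variation, if you just want the single most recent full backup of one database regardless of the day it finished, a sketch like this works (the database name 'dbnm' is assumed from the question; type = 'D' keeps only full database backups):
SELECT TOP (1)
    b.database_name,
    m.physical_device_name,
    b.backup_finish_date
FROM msdb.dbo.backupset b
JOIN msdb.dbo.backupmediafamily m ON b.media_set_id = m.media_set_id
WHERE b.database_name = 'dbnm'   -- assumed name, replace with your database
  AND b.type = 'D'               -- 'D' = full database backup
ORDER BY b.backup_finish_date DESC;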
There is another way to get the latest file from a Windows folder, using PowerShell.
Take a look at Keith Hill's answer here: Finding modified date of a file/folder
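If you do want to order by the timestamp that is embedded in the file names returned by xp_DirTree, here is a minimal sketch. It assumes SQL Server 2016 or later (for JSON_VALUE) and the fixed dbnm_ prefix shown in the question, and it ignores the last number in the name because values such as 613 do not look like valid seconds:
;WITH f AS
(
    SELECT FileName,
           -- 'dbnm_2019_4_18_11_32_234.bak' -> '[2019,4,18,11,32,234]'
           '[' + REPLACE(REPLACE(REPLACE(FileName, 'dbnm_', ''), '.bak', ''), '_', ',') + ']' AS parts
    FROM #File
    WHERE IsFile = 1
      AND FileName LIKE 'dbnm[_]%.bak'
)
SELECT TOP (1)
       FileName,
       DATETIMEFROMPARTS(
           JSON_VALUE(parts, '$[0]'),   -- year
           JSON_VALUE(parts, '$[1]'),   -- month
           JSON_VALUE(parts, '$[2]'),   -- day
           JSON_VALUE(parts, '$[3]'),   -- hour
           JSON_VALUE(parts, '$[4]'),   -- minute
           0, 0) AS BackupTime          -- seconds/milliseconds ignored
FROM f
ORDER BY BackupTime DESC;
The msdb-based query above is still the more reliable route, because it does not depend on the file-naming convention.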

Related

Db2: how to configure external tables using extbl_location, extbl_strict_io

Could you please give an example of how to set up these parameters? I need to create an external table and upload data to it.
I need to know how to configure the extbl_location and extbl_strict_io parameters.
I created the table like this:
CREATE EXTERNAL TABLE textteacher (
    ID int,
    Name char(50),
    email varchar(255)
) USING (
    DATAOBJECT 'teacher.csv'
    FORMAT TEXT
    CCSID 1208
    DELIMITER '|'
    REMOTESOURCE 'LOCAL'
    SOCKETBUFSIZE 30000
    LOGDIR '/tmp/logs'
);
and tried to upload data to it.
insert into textteacher (ID,Name,email) select id,name,email from teacher;
and got this exception: [428IB][-20569] The external table operation failed due to a problem with the corresponding data file or diagnostic files. File name: "teacher.csv". Reason code: "1".. SQLCODE=-20569, SQLSTATE=428IB, DRIVER=4.26.14
If I understand the documentation correctly, the extbl_location parameter should point to the directory where the data will be saved. I suppose the full path would look like
$extbl_location+'/'+teacher.csv
I found some documentation about the error:
https://www.ibm.com/support/pages/how-resolve-sql20569n-error-external-table-operation
I tried to run this command in the Docker command line:
/opt/ibm/db2/V11.5/bin/db2 get db cfg | grep -i external
but it does not return any information about external tables.
CREATE EXTERNAL TABLE statement:
file-name
...
When both the REMOTESOURCE option is set to LOCAL (this is its default value) and the extbl_strict_io configuration parameter is set
to NO, the path to the external table file is an absolute path and
must be one of the paths specified by the extbl_location configuration
parameter. Otherwise, the path to the external table file is relative
to the path that is specified by the extbl_location configuration
parameter followed by the authorization ID of the table definer. For
example, if extbl_location is set to /home/xyz and the authorization
ID of the table definer is user1, the path to the external table file
is relative to /home/xyz/user1/.
So, if you use a relative path to a file such as teacher.csv, you must set extbl_strict_io to YES.
For an unload operation, the following conditions apply:
If the file exists, it is overwritten.
Required permissions:
If the external table is a named external table, the owner must have read and write permission for the directory of this file.
If the external table is transient, the authorization ID of the statement must have read and write permission for the directory of this file.
Moreover, in the directory specified by extbl_location you must create a sub-directory whose name equals the username (in lowercase) of the table owner, and ensure that this user (not the instance owner) has read/write permission on that sub-directory.
Update:
To set this up, presuming that user1 runs this INSERT statement:
sudo mkdir -p /home/xyz/user1
# user1 must have an ability to cd to this directory
sudo chown user1:$(id -gn user1) /home/xyz/user1
db2 connect to mydb
db2 update db cfg using extbl_location /home/xyz extbl_strict_io YES
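If you prefer checking the settings from SQL rather than grepping db2 get db cfg, a quick sketch against the SYSIBMADM.DBCFG administrative view shows the current and deferred values of both parameters:
-- current value vs. the value that takes effect after the database is reactivated
SELECT NAME, VALUE, DEFERRED_VALUE
FROM SYSIBMADM.DBCFG
WHERE NAME IN ('extbl_location', 'extbl_strict_io');
If VALUE and DEFERRED_VALUE differ, reconnect (or deactivate and reactivate the database) before re-running the INSERT, so the new settings are actually in effect.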

Flyway migration error with DB2 11.1 SP including pure xml DDL

I have a fairly complex Db2 V11.1 SP that will compile and deploy manually, but when I add the SQL to a migration script I get this issue:
https://github.com/flyway/flyway/issues/2795
As the SP compiles and deploys manually, I am confident the SP SQL is OK.
Does anyone have any idea what the underlying issue might be?
DB2 11.1
Flyway 6.4.1 (I have tried 7.x versions with the same result)
The SP uses pureXML functions, so the SP SQL includes $ and # characters.
I tried using obscure statement terminator characters (~, ^), but a simple test with pureXML functions and # as the statement terminator seemed to work:
--#SET TERMINATOR #
SET SCHEMA CORE
#
CREATE OR REPLACE PROCEDURE CORE.XML_QUERY
LANGUAGE SQL
BEGIN
DECLARE GLOBAL TEMPORARY TABLE OPTIONAL_ELEMENT (
LEG_SEG_ID BIGINT,
OPTIONAL_ELEMENT_NUM INTEGER,
OPTIONAL_ELEMENT_LIST VARCHAR(100),
CLSEQ INTEGER
) ON COMMIT PRESERVE ROWS NOT LOGGED WITH REPLACE;
insert into session.optional_element
select distinct LEG_SEG_ID, A.OPTIONAL_ELEMENT_NUM, A.OPTIONAL_ELEMENT_LIST, A.CLSEQ
from core.leg_seg , XMLTABLE('$d/LO/O' passing XMLPARSE(DOCUMENT(optional_element_xml)) as "d"
COLUMNS
OPTIONAL_ELEMENT_NUM INTEGER PATH '#Num',
OPTIONAL_ELEMENT_LIST VARCHAR(100) PATH 'text()',
CLSEQ INTEGER PATH '#Seq') AS A
WHERE iv_id = 6497222690 and optional_element_xml is not null;
END
#

How can I fill a table from a file when using Flyway migration scripts

I have these scripts:
/*The extension is used to generate UUID*/
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- auto-generated definition
create table users
(
    id     uuid not null DEFAULT uuid_generate_v4()
        constraint profile_pkey
        primary key,
    em     varchar(255),
    "user" varchar(255)  -- "user" is a reserved word in PostgreSQL, so it must be quoted
);
In IntelliJ IDEA (a project with Spring Boot):
src/main/resources/db-migration
src/main/resources/sql_scripts:
copy.sql
user.txt
I'm just trying to run a simple SQL command for now to see that everything works:
copy.sql
COPY profile FROM '/sql_scripts/user.txt'
USING DELIMITERS ',' WITH NULL AS '\null';
user.txt
'm#mai.com', 'sara'
's#yandex.ru', 'jacobs'
But when I run the COPY command, I get an error:
ERROR: could not open file...
Does anyone know how this should work and what needs to be fixed?
Strong possibility it's a pathing issue; could you try, instead of
COPY profile FROM '/sql_scripts/user.txt'
doing
COPY profile FROM './sql_scripts/user.txt'
(or an absolute path)
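One more thing worth keeping in mind: a plain COPY ... FROM 'file' is executed by the PostgreSQL server process, so the path is resolved on the database server, not inside the Spring Boot application, and files under src/main/resources are not visible to it. A hedged sketch with a hypothetical server-side location, targeting the users table created above:
-- the path below is hypothetical; it must exist on the database server
-- and be readable by the postgres server process
COPY users (em, "user")
FROM '/var/lib/postgresql/import/user.txt'
WITH (FORMAT text, DELIMITER ',', NULL '\null');
If the file has to ship with the application instead, the usual workarounds are psql's client-side \copy or streaming the resource through the JDBC driver's CopyManager.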

Loading /Inserting Multiple XML files into Oracle table at once

I have a table xml_table_data. Below are the structure and sample data of the table.
But I want to insert multiple XML files (9 of them here) into the table in one go. These files reside in a DB directory.
My code to insert an XML file into the table:
CREATE TABLE xml_table_data (
    File_name   varchar2(100),
    Insert_date timestamp,
    xml_data    XMLTYPE
);

INSERT INTO xml_table_data (file_name, xml_data)
VALUES ('DataTransfer_HH_TWWholesale_001_004_12142020113003.xml',
        XMLTYPE (BFILENAME ('TESTING', 'DataTransfer_HH_TWWholesale_001_004_12142020113003.xml'), NLS_CHARSET_ID ('AL32UTF8')));
Please help me with this. Thanks for reading my query.
You can use an external table with preprocessing to read the file names from the directory.
ALTER SESSION SET CONTAINER=pdb1;
CREATE DIRECTORY data_dir AS '/u02/data';
CREATE DIRECTORY script_dir AS '/u02/scripts';
CREATE DIRECTORY log_dir AS '/u02/logs';
GRANT READ, WRITE ON DIRECTORY data_dir TO demo1;
GRANT READ, EXECUTE ON DIRECTORY script_dir TO demo1;
GRANT READ, WRITE ON DIRECTORY log_dir TO demo1;
Create a list_files.sh file in the scripts directory. Make sure oracle is the owner and the privileges on the file are 755.
The preprocessor script does not inherit the $PATH environment variable, so you have to prepend /usr/bin to all commands.
/usr/bin/ls -1 /u02/data/test*.xml | /usr/bin/xargs -n1 /usr/bin/basename
You also need a source file for the external table, but this can be an empty dummy file.
CREATE TABLE data_files
( file_name VARCHAR2(255))
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE CHARACTERSET AL32UTF8
PREPROCESSOR script_dir: 'list_files.sh'
BADFILE log_dir:'list_files_%a_%p.bad'
LOGFILE log_dir:'list_files_%a_%p.log'
FIELDS TERMINATED BY WHITESPACE
)
LOCATION ('dummy.txt')
)
REJECT LIMIT UNLIMITED;
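Before running the insert, a quick sanity check that the preprocessor wiring works: selecting from the external table should simply return one file name per row.
-- each row is one file name emitted by list_files.sh
SELECT file_name FROM data_files;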
Now you can insert the xml data into your table.
INSERT INTO xml_table_data
( file_name,
insert_date,
xml_data
)
SELECT file_name,
SYSTIMESTAMP,
XMLTYPE (BFILENAME ('DATA_DIR', file_name), NLS_CHARSET_ID ('AL32UTF8'))
FROM data_files;
You will still need to adapt the example to your environment.

How to upload data into a Redshift Table with a Date Format 'MMDDYYYY'

I need to upload data with a date column in the format 'MMDDYYYY'.
This is the current code I am using to send it via psql:
SET BaseFolder=C:\
psql -h hostname -d database -c "\copy test_table(id_test,
colum_test,columndate DATEFORMAT 'MMDDYYYY')
from '%BaseFolder%\test_table.csv' with delimiter ',' CSV HEADER;"
Here test_table is the table in the Postgres DB:
Id_test: float8
Column_test: float8
columndate: timestamp
id_test colum_test colum_date
94 0.3306 12312017
16 0.3039 12312017
25 0.5377 12312017
88 0.6461 12312017
I am getting the following error when I run the above command in CMD on Windows 10:
ERROR: date/time field value out of range: "12312017"
HINT: Perhaps you need a different "datestyle" setting.
CONTEXT: COPY test_table, line 1, column columndate : "12312017"
The DATEFORMAT applies to the whole COPY command, not a single field.
I got it to work as follows...
Your COPY command suggests that the data is comma-separated, so I used this input data and stored it in an Amazon S3 bucket:
id_test,colum_test,colum_date
94,0.3306,12312017
16,0.3039,12312017
25,0.5377,12312017
88,0.6461,12312017
I created a table:
CREATE TABLE foo (
foo_id BIGINT,
foo_value DECIMAL(4,4),
foo_date DATE
)
Then loaded the data:
COPY foo (foo_id, foo_value, foo_date)
FROM 's3://my-bucket/foo.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-Role'
CSV
IGNOREHEADER 1
DATEFORMAT 'MMDDYYYY'
Please note that the recommended way to load data into Amazon Redshift is from files stored in Amazon S3. (I haven't tried using the native psql copy command with Redshift, and would recommend against it — particularly for large data files. You certainly can't mix commands from the Redshift COPY command into the psql Copy command.)
Then, I ran SELECT * FROM foo and it returned:
16 0.3039 2017-12-31
88 0.6461 2017-12-31
94 0.3306 2017-12-31
25 0.5377 2017-12-31
That is a horrible format for dates. Don't break your date type, convert your data to a saner format.
=> select to_date('12312017', 'MMDDYYYY');
to_date
------------
2017-12-31
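If the source files cannot be changed, a common pattern (sketched here with assumed table names) is to land the raw MMDDYYYY text in a staging column and convert it once with TO_DATE, rather than relying on DATEFORMAT in every load:
-- staging table keeps the raw 8-character date text
CREATE TABLE test_table_stage (
    id_test    FLOAT8,
    colum_test FLOAT8,
    columndate VARCHAR(8)   -- raw 'MMDDYYYY' text
);
-- after loading the staging table (e.g. COPY from S3), convert while moving rows
INSERT INTO test_table (id_test, colum_test, columndate)
SELECT id_test,
       colum_test,
       TO_DATE(columndate, 'MMDDYYYY')
FROM test_table_stage;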